Quota
Reset Quota on a Disk Server
xfs_quota -x -c 'off -u' /dev/md0
xfs_quota -x -c 'off -g' /dev/md0
xfs_quota -x -c 'remove ' /dev/md0
umount /cfs/fs<xx>/
mount -o rw,noatime,uquota,gquota,logbsize=32768,logbufs=8,inode64,swalloc /dev/md0 /cfs/fs<xx>/
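After the remount, the quota state and the current group report can be checked to confirm that accounting and enforcement are active again (same example device as above):
xfs_quota -x -c 'state' /dev/md0
xfs_quota -x -c 'report -g -b -i' /dev/md0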
Set Default Quota on a Disk Server for Groups
xfs_quota -x -c 'limit -g bsoft=0 bhard=0 isoft=0 ihard=0 -d' /dev/md0
Set Default Quota in a Disk Cluster for Groups
wassh -p 10 -l root -c fileserver/dmfstests "/afs/cern.ch/project/dm/xcfs/scripts/default_group_quota_0.sh"
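The content of default_group_quota_0.sh is not reproduced here; a minimal sketch of such a script, assuming it simply applies the per-server default shown above to every mounted XFS filesystem (the actual script may differ), could look like:
#!/bin/bash
# sketch only - apply a zero default group quota to every mounted XFS filesystem
for dev in $(awk '$3 == "xfs" {print $1}' /proc/mounts); do
  xfs_quota -x -c 'limit -g bsoft=0 bhard=0 isoft=0 ihard=0 -d' "$dev"
done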
CHANGELOG
xrootd-catalogfs:
3.0.1
- xcfs-admin can set default quota with 'xcfs-admin quota set default 0 0 0 0'
3.0.2
- changed the user/group scheme to use uid/gid internally instead of names (affects the xfs_quota report and the bookkeeping of spaces)
3.0.3
- added support for 'spaces':
- filesystems can be grouped into spaces
- introduced grid-mapfile & voms-mapfile
- clients can select the authentication method to be used (e.g. krb5 or gsi)
- added 'whoami' + 'filesystems' command to xcfs
3.0.4
3.0.5
- added support for in-space and between-space replication via 'addreplica'
- added support for 'dropreplica', 'shrinkreplica'
- added support for filesystem drain
- FS states: offline / RO (drained) / RW
3.0.6
3.0.7
- tested with 744 filesystems and 90 users and changed the XfsPoller implementation; a full poller loop over 744 filesystems now completes in 42 s
- fixed xcfs-admin + xcfs to work with different xcfs root/proc directories
3.0.8
- included an on-the-fly trace method which can be set via xrdcp
- included support for an authorization plugin (e.g. AliceTokenAcc can be loaded)
3.0.9 + 3.0.10
- Introduced policyfiles
- flexible interface to define how many replicas have to be created
- define by path
- define by user,group,space
- flexible interface to
- map a user/group into a space
- map a path into a space
- define a spaceuser, e.g. quota is booked on a single spaceuser and accessible to all
- flexible interface to
- define garbage collection policies
- 'find'-based garbage collection
- e.g. time based garbage collection
- suitable for multi-user quota spaces
- scratch functionality (wipe out the files older than 3 days)
- volume based garbage collection
- suitable in spaces which define a spaceuser
- catalog.replicapolicyfile
###############################################################################################
# * replica policies are defined by path
# * the policy with the deepest path match applies
# * the defined policies are:
# - default=<n> - everything under this path gets <n> replicas
# - *.<suffix>=<n> - every file which ends with .<suffix> gets <n> replicas
# - user:<username>=<n> - the user <username> gets <n> replicas
# - group:<groupname>=<n> - the group <groupname> gets <n> replicas
# - space:<spacename>=<n> - if the space <spacename> is used, <n> replicas are created
###############################################################################################
/cfs/cfs1/namespace/xcfs/cern.ch/ha/ default=2,*.log=1,*.txt=1
/ space:default=1,space:t1=2,space:t2=2,space:t3=2
- catalog.spacemapfile
###############################################################################################
# * space policies are defined by path
# * the policy with the deepest path match applies
# * the defined policies are:
# - default=<n> - everything under this path goes to space <n>
# - *.<suffix>=<n> - every file which ends with .<suffix> goes to space <n>
# - user:<username>=<n> - the user <username> is mapped to space <n>
# - group:<groupname>=<n> - the group <groupname> is mapped to space <n>
# - spaceuser:<spaceuser>=<n> - all space reservations are accounted to <spaceuser> in space <n>
###############################################################################################
/cfs/cfs1/namespace/xcfs/cern.ch/ha/ default=t1,*.txt=default
/cfs/cfs1/namespace/xcfs/cern.ch/spaceuser/ spaceuser:nobody=t1
- catalog.gcspacefile
################################################################################
# garbage collection policies defined by space
################################################################################
# time based garbage collection
default cmd="find SPACE -amin -7200 -exec usleep 1000 \; -exec echo {} \;",frequency=60
# volume based garbage collection
default low=75,high=95,frequency=60,usleep=1000
t1 low=75,high=95,frequency=60,usleep=1000
t2 low=75,high=95,frequency=60,usleep=1000
- catalog.archivepolicyfile
################################################################################
# archiving policies defined by space
################################################################################
# archiving policies
# syntax: <space> <key1=val1>,<key2=val2>,.....
# keys have to define:
# --- cmd="xcfs-stageout" or cmd="xcfs-aggregate"
# ---------xcfs-stageout: every file matching the rule is staged out using the xcfs-stageout command
# ---------xcfs-aggregate: every file matching the rule is aggregated and an aggregation is backed up once the minimum size requirement is fulfilled
# --- unchanged=<n seconds> : the backup is scheduled <n seconds> later - if the file didn't change in the meantime it will be backed up
# --- minsize=<byte> : the file has to be at least <byte> bytes in size to match the rule
# --- maxsize=<byte> : the file has to be <= <byte> bytes to match the rule
# ----args="...." :
# you should define here for xcfs-stageout and xcfs-aggregate:
# map=<xcfs path>=><backup medium path> to translate the local and backup medium path
# mode=<mode> to define the backup medium mode
# env=<key>=<val>; needed env for the <cmd> program to work
# for xcfs-aggregate additionally:
# minsize=<minimum archive size> if the aggregation reaches this size, an archive is created
# maxsize=<maximum archive size> the aggregation can never be bigger than this value
# if a single file doesn't fit, it is removed from the aggregation
# ----keepreplica=<n> : if a file was successfully backed up the replicas are shrunk to <n>. If <n>="*" the original number of replicas is kept
#
t1 cmd="xcfs-stageout",args="map=/xcfs/cern.ch/spaceuser/=>/castor/cern.ch/user/a/apeters/archive/;mode=644;env=STAGE_HOST=castoralice;env=STAGE_SVCCLASS=default;env=RFIO_USE_CASTOR_V2=YES;",unchanged=60,minsize=1024,maxsize=6000000,exclusive,keepreplica=1
default cmd="xcfs-aggregate",args="map=/xcfs/cern.ch/ha=>/castor/cern.ch/user/a/apeters/archive/;minsize=50000000;maxsize=4294967296;archive=/castor/cern.ch/user/a/apeters/archive/;mode=644;env=STAGE_HOST=castoralice;env=STAGE_SVCCLASS=default;env=RFIO_USE_CASTOR_V2=YES;",unchanged=60,minsize=1024,maxsize=6000000,exclusive,keepreplica=*
- Introduced 'grouplabel=<file grouping tag>'
- the grouplabel is pinned into the trusted ext. attributes of the namespace entry
- Archiving Policies
- simple 'write new file -> back it up' logic
- after a backup the number of replicas can be shrunk to a dynamic value to save space
- aggregation policy
- catch files of a certain size and aggregate them into archives
xrdsslsec:
1.0.1
- fixed configure scripts to allow compilation against an xrootd source tree
xrootd:
xcfsfs - FUSE plugin
1.4.0
- introduced write-back cache
1.5.0
- automatically disable read-ahead for files opened in rw mode, since cache inconsistencies can occur
- fixed the stdout/stderr redirection in /etc/init.d/xcfsfslinkd
2.0.0
- new version with multi-user support
- bug fix for filenames with space characters
GridFTP Gateway Setup Test
The standard globus-gridftp-server can be started using /etc/init.d/fs-gridftp to export an XCFS mount. The grid-mapfile needs the proper mapping to the XCFS UID. Basic functional tests such as file upload/download and proper authorization work without problems.
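For illustration, a grid-mapfile entry maps a client certificate subject (DN) to the local account that owns the XCFS namespace; the DN and account name below are placeholders:
"/C=CH/O=CERN/OU=GRID/CN=Some User" xcfsuser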
xrfcp
1.0.2
- added support for writing range files
- symbolic links with the destination syntax "path <startbyte>:<stopbyte>" are interpreted as range links into archives
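As an illustration, such a range link can be created with an ordinary symbolic link; the archive path and byte offsets below are made-up values:
ln -s "/xcfs/cern.ch/ha/archive-0001.arc 0:1048575" member-file.dat
xrfcp then interprets member-file.dat as a range link covering bytes 0 to 1048575 of the archive.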
RPMS
/afs/cern.ch/project/dm/xcfs/rpms/xrootd-CVS20080517_pext-8.x86_64.rpm
/afs/cern.ch/project/dm/xcfs/rpms/xrootd-catalogfs-3.0.6-1.x86_64.rpm
/afs/cern.ch/project/dm/xcfs/rpms/xcfsfs-2.0.0-4.x86_64.rpm
/afs/cern.ch/project/dm/xcfs/rpms/fuse-2.7.3-kernel_2.6.9_67.0.20.EL.cernsmp_6.x86_64.rpm
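Assuming the AFS path is accessible on the target node, the packages can be installed together with rpm, e.g.:
rpm -Uvh \
  /afs/cern.ch/project/dm/xcfs/rpms/xrootd-CVS20080517_pext-8.x86_64.rpm \
  /afs/cern.ch/project/dm/xcfs/rpms/xrootd-catalogfs-3.0.6-1.x86_64.rpm \
  /afs/cern.ch/project/dm/xcfs/rpms/xcfsfs-2.0.0-4.x86_64.rpm \
  /afs/cern.ch/project/dm/xcfs/rpms/fuse-2.7.3-kernel_2.6.9_67.0.20.EL.cernsmp_6.x86_64.rpm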
Performance Table
| Version | Date | Type | Performance | CPU | Configuration | Limitation |
| 3.0.10 | 02.10.2008 | Write Single Client | 150/s | | Space with 1 Disk Server - shared quota | latency |
| 3.0.10 | 02.10.2008 | Write 10 Clients | 220/s | | Space with 1 Disk Server - shared quota | seems DRBD or Disk |
| 3.0.10 | 02.10.2008 | Read Single Client | 395/s | | Space with 1 Disk Server - shared quota | |
| 3.0.10 | 02.10.2008 | Read 10 & 15 Clients | 2500/s | 150% | Space with 1 Disk Server - shared quota | |
| n.n. | | Stat | 12500/s | | | |
| n.n. | | Tot. Cmds | 15000/s | | | |
FUSE Mount Performance

--
AndreasPeters - 31 Jul 2008