xrootd scalability tests
All test plans and results of the xrootd scalability tests are reported on this wiki page.
Test plans
First series of tests
Test results:
Test Series 1: <N> clients concurrently write 1 MB files with xrdcp (100 clients per client host) using no authentication mechanism
In Test Series 1 we deployed the CVS version of 08.01.2008 with one manual bug fix in XrdCmsClientMan.
We use 2000 clients (20 hosts with 100 clients each).
The setup needs 2 RPMs and one configuration file on each machine:
# filesystem plugin and exported namespace
xrootd.fslib /opt/xrootd/lib/libXrdOfs.so
all.export /xns/ r/w
# settings applied only to instances named "server" (the disk servers)
if named server
oss.localroot /var/spool/xroot/
oss.cache public /shift/xns/
fi
# unix authentication, i.e. no strong authentication
sec.protocol /opt/xrootd/lib unix
# every node is a server; lxb8971 additionally acts as the manager (redirector)
all.role server
all.role manager if lxb8971.cern.ch
all.manager lxb8971.cern.ch:1213
all.adminpath /var/spool/xroot/
# cluster management: allow only CERN hosts, tune startup/suspend delays,
# space thresholds and the wait imposed on requests that have to be retried
cms.allow host *.cern.ch
cms.delay startup 1 suspend 10
cms.space linger 0 recalc 10 min 1g 2g
cms.request repwait 5
# thread scheduler and file descriptor limits
xrd.sched mint 8 maxt 1024 avtl 128 idle 256
#xrootd.async off
oss.fdlimit 16384 32768

The limitation to 270 files/s is induced by the forced delay of 5 s per file creation; the theoretical maximum for this test is therefore 2000 clients / 5 s = 400 files/s.
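For reference, a minimal sketch of what one client host ran in this test could look as follows; lxb8971.cern.ch and the /xns/ export come from the configuration above, while the /xns/test/ subdirectory, the file naming and the loop itself are illustrative assumptions (the actual driver script is not reproduced here):

# create the 1 MB source file once per client host
dd if=/dev/zero of=/tmp/1mb.dat bs=1M count=1
# spawn 100 concurrent xrdcp writes against the redirector
for i in $(seq 1 100); do
  xrdcp /tmp/1mb.dat root://lxb8971.cern.ch//xns/test/$(hostname)-$i.dat &
done
wait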
Test Series 2: <N> clients concurrently read 1 MB files with xrdcp (100 clients per client host) using no authentication mechanism
Same configuration as for the write test, but this time reading back all files with 1000 clients.

We can read at a maximum rate of 1200 files/s. The performance decreased to 800 files/s in further tests (possibly due to limitations in the disk servers' random I/O bandwidth).
The file locations are in this case already cached in the redirector's memory.

The average time to read a 1 MB file is 1.128 s. The tail towards longer transfer times is quite steep and narrow.
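A corresponding sketch of the read-back side, again with the same assumed file naming as in the write sketch above; logging the per-transfer time is only one possible way to obtain the average quoted here:

# read every file back and log the transfer time
for i in $(seq 1 100); do
  ( time xrdcp root://lxb8971.cern.ch//xns/test/$(hostname)-$i.dat /tmp/read-$i.dat ) 2>> /tmp/readtimes.log &
done
wait
rm -f /tmp/read-*.dat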
Test Series 3: tests with xrootd and the namespace plugin + Kerberos authentication
We have configured a catalogfs plugin on the head node. The namespace is stored in an XFS filesystem created inside a memory file residing on a tmpfs filesystem.
For a real-life setup we would need an SSD or flash drive. File metadata (user/group/size/permissions/mtime etc.) is stored as extended attributes on the XFS filesystem.
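The namespace filesystem can be reproduced roughly along the following lines; this is only a sketch, and the image size, mount points and attribute names are assumptions (the catalogfs plugin configuration itself is not shown):

# tmpfs holding the memory file that backs the namespace
mount -t tmpfs -o size=2200m tmpfs /mnt/nsram
# 2 GB image file, formatted as XFS and loop-mounted
dd if=/dev/zero of=/mnt/nsram/ns.img bs=1M count=2048
mkfs.xfs /mnt/nsram/ns.img
mkdir -p /var/namespace
mount -o loop /mnt/nsram/ns.img /var/namespace
# per-file metadata is kept as extended attributes on the namespace entries
mkdir -p /var/namespace/xns
touch /var/namespace/xns/testfile
setfattr -n user.uid -v 1001 /var/namespace/xns/testfile
getfattr -d /var/namespace/xns/testfile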
The Kerberos authentication rate is 25 authentications/s, i.e. for xrdcp we can handle 25 requests per second at 10% CPU usage on the head node. Unfortunately the authentication plugin has a kind of global mutex, so we hit a hard limit although there is plenty of CPU left.
Using ROOT applications as clients (which authenticate only once at the beginning) we can handle 170 reads/s with the full security framework. Disabling head-node to disk-server token signing we can reach 900 reads/s, although this encryption path also contains a global mutex which could be removed after careful modification of the code.
A single ROOT client can open 50 files/s with the full authentication/authorization chain, and 100 files/s without the head-node to disk-server signature.
The CPU usage on the head node never exceeds 20%.
The in-memory filesystem was limited to 2 GB, which could store 435,000 files.
Test Series 4: Kerberos authentication measurements
8 individual clients -> 8 servers on an 8-core machine
0-copy operation, no auth: 8 x 40 files/s = 320 files/s - server 7% CPU
0-copy operation, auth: 8 x 5 files/s = 40 files/s - server 80% CPU

8 individual clients -> 1 server on an 8-core machine
0-copy operation, no auth: 8 x 40 files/s = 320 files/s - server 7% CPU
0-copy operation, auth: 8 x 3 files/s = 24 files/s - server 14% CPU
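On the client side the authenticated runs differ from the unauthenticated ones mainly in that a Kerberos ticket has to be obtained first (and the servers have to advertise krb5 in their sec.protocol line); a sketch with a placeholder principal and target path:

# obtain a Kerberos ticket; the xrootd client picks it up from the default credential cache
kinit someuser@CERN.CH
# the copy command itself is unchanged, authentication is negotiated per connection
xrdcp /tmp/1mb.dat root://lxb8971.cern.ch//xns/test/krb5-test.dat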
Test Series 5: GSI authentication measurements
8 individual clients -> 8 servers on an 8-core machine
0-copy operation, auth: 8 x 7 files/s = 56 files/s - server 60% CPU

8 individual clients -> 1 server on an 8-core machine
0-copy operation, auth: 8 x 1 file/s = 8 files/s - server 32% CPU
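For the GSI runs the client instead needs a valid grid proxy; another sketch, assuming the standard Globus proxy location and a placeholder target path:

# create a short-lived proxy certificate in the default location (/tmp/x509up_u<uid>)
grid-proxy-init
xrdcp /tmp/1mb.dat root://lxb8971.cern.ch//xns/test/gsi-test.dat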
--
LanaAbadie - 28 Feb 2008