xrootd scalability tests

All test plans and results of the xrootd scalability tests are reported on this wiki page.

Test plans

First series of tests (see the attached test plan document xrootd_scalability_test_plan_v3.doc)

Test results

Test Series 1: <N> clients concurrently writing 1 MB files with xrdcp (100 clients per client host), using no authentication mechanism

For Test Series 1 we deployed the CVS version of 08.01.2008 with one manual bug fix in XrdCmsClientMan.
We used 2000 clients (20 hosts with 100 clients each).
The setup requires two RPMs and one configuration file on each machine:

# load the standard filesystem (ofs) plugin and export the test namespace read/write
xrootd.fslib /opt/xrootd/lib/libXrdOfs.so
all.export /xns/ r/w

# settings applied only to the disk-server instances
if named server
oss.localroot /var/spool/xroot/
oss.cache public /shift/xns/
fi

# unix authentication only (no strong authentication in this test series)
sec.protocol /opt/xrootd/lib unix

# every host runs as a server; lxb8971 additionally acts as the manager/redirector
all.role server
all.role manager if lxb8971.cern.ch

all.manager lxb8971.cern.ch:1213

all.adminpath /var/spool/xroot/

# cmsd cluster membership, delay and space parameters
cms.allow host *.cern.ch
cms.delay startup 1 suspend 10
cms.space linger 0 recalc 10 min 1g 2g
cms.request repwait 5

# xrd thread scheduler limits
xrd.sched mint 8 maxt 1024 avtl 128 idle 256

#xrootd.async off

# file descriptor limits
oss.fdlimit 16384 32768
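
For illustration, a minimal sketch of how one client host could drive 100 concurrent 1 MB writes with xrdcp against the redirector (the port 1094, the source file, and the target naming scheme are assumptions; the actual driver scripts used for the test are not reproduced on this page):

#!/bin/bash
# sketch: one client host writing 100 x 1 MB files concurrently via the redirector
REDIRECTOR=lxb8971.cern.ch:1094
dd if=/dev/urandom of=/tmp/1mb.dat bs=1M count=1               # 1 MB source file
for i in $(seq 1 100); do
    xrdcp -f /tmp/1mb.dat root://${REDIRECTOR}//xns/$(hostname -s)/test_${i}.dat &
done
wait                                                           # wait for all 100 transfers to finish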

The limitation to 270 files/s is caused by the forced 5 s delay imposed on every file creation: with 2000 clients this gives a theoretical maximum of 2000 / 5 = 400 files/s for this test.

Test Series 2: <N> clients concurrently reading 1 MB files with xrdcp (100 clients per client host), using no authentication mechanism

Same configuration as for the write test, but this time reading back all files with 1000 clients.
We can read at a maximum rate of 1200 files/s. In further tests the performance decreased to 800 files/s (possibly due to limitations of the disk servers' random I/O bandwidth).
The file locations are in this case already cached in the redirector memory.

The average time to read a 1 MB file is 1.128 s. The tail towards longer transfer times is quite steep and narrow.
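
Correspondingly, a minimal sketch of the read-back direction (same assumptions on port and paths as in the write sketch above; the data is simply discarded on the client):

#!/bin/bash
# sketch: one client host reading back 100 files concurrently via the redirector
REDIRECTOR=lxb8971.cern.ch:1094
for i in $(seq 1 100); do
    xrdcp -f root://${REDIRECTOR}//xns/$(hostname -s)/test_${i}.dat /dev/null &
done
wait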

Test Series 3: tests with xrootd and namespace plugin + Kerberos authentication

We configured a catalogfs plugin on the head node. The namespace is stored in an XFS filesystem that was created inside a memory file residing in a tmpfs filesystem.
For a real-life setup an SSD or flash drive would be needed. File metadata (user/group/size/permissions/mtime etc.) is stored as extended attributes on the XFS filesystem.
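
For reference, a minimal sketch of how such a namespace filesystem can be prepared (mount points, image size and attribute names here are illustrative, not the exact values used on the head node):

# RAM-backed file holding an XFS filesystem for the namespace
mount -t tmpfs -o size=2200m tmpfs /mnt/ram
dd if=/dev/zero of=/mnt/ram/namespace.img bs=1M count=2048     # 2 GB image file
mkfs.xfs -q /mnt/ram/namespace.img                             # XFS inside the memory file
mkdir -p /var/namespace
mount -o loop /mnt/ram/namespace.img /var/namespace            # namespace root used by the plugin

# file metadata is kept as extended attributes of the namespace entries
# (attribute names below are hypothetical)
mkdir -p /var/namespace/xns
touch /var/namespace/xns/somefile
setfattr -n user.size -v 1048576 /var/namespace/xns/somefile
getfattr -d /var/namespace/xns/somefile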

The Kerberos authentication rate is 25 authentications/s; e.g. with xrdcp we can handle 25 requests per second at 10% CPU usage on the head node. Unfortunately the authentication plugin holds what is effectively a global mutex, so we hit a hard limit although plenty of CPU is left.
Using ROOT applications as clients (which authenticate only once at the beginning) we can handle 170 reads/s with the full security framework. Disabling head-node to disk-server token signing we reach 900 reads/s; the encryption code also holds a global mutex, which could be removed after careful modification of the code.
A single ROOT client can open 50 files/s with the full authentication/authorization chain, and 100 files/s without the head-node to disk-server signature.
The CPU usage on the head node never exceeds 20%.
The in-memory filesystem was limited to 2 GB, which could store 435,000 files.

-- LanaAbadie - 28 Feb 2008

Topic attachments:
readratex.jpg (20.1 K, 2008-02-28, AndreasPeters)
transfertimex.jpg (39.3 K, 2008-02-28, AndreasPeters)
writerate.jpg (26.2 K, 2008-02-28, AndreasPeters)
xrootd_scalability_test_plan_v3.doc (30.5 K, 2008-02-28, LanaAbadie)