NFS 4.1 Working Group

This working group is evaluating the open NFS 4.1 (pNFS) protocol as a generic POSIX access method for EMI data sources and applications. The partners involved are DESY, CERN and CNAF/INFN.

Common Documents

| Where | Event | When | Who | What |
| CERN | GDB | Jan 2011 | Patrick | NFS 4.1 at GDB; end of demonstrator effort |
| Cornell | HEPiX Fall | Nov 2010 | Patrick | NFS 4.1 at HEPiX Fall 2010 in Cornell |
| Taipei | CHEP'10 | 2010 | Yves Kemp | NFS 4.1 at CHEP'10 |
| CERN | GDB | Oct 2010 | Patrick | NFS 4.1 at the October 2010 GDB; Milestone II report |
| Amsterdam | Jamboree | July 2010 | Gerd Behrmann | NFS 4.1: 11 reasons you should care |
| London | WLCG Collaboration Workshop | June 2010 | Patrick, Jean-Philippe | Introduction of the demonstrator, for WLCG and EMI |

DESY Test System setup

| Task | People |
| Testbed | Dima and Yves |
| pNFS kernel and driver | Tigran |
| NFS 4.1 dCache server | Tigran and Tanja |
| HammerCloud support | Johannes Elmsheuser, ATLAS, Munich |

  • dCache server: To allow fast turnaround, the recent NFS 4.1 code tested here has not yet been merged into the official trunk of the dCache code management system. Consequently, no official dCache release with the features tested here is available yet. The plan is to make such a system publicly available before CHEP'10.
  • pNFS client: We are running the 2.6.35 kernel with the pNFS driver, built for SL5, plus the corresponding mount tools, on SL5 worker nodes.
  • Security:
    • All tests are done without integrity protection and encryption.
    • A beta version of the Kerberos code is available but has not yet been sufficiently tested.
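As a sketch of how such a pNFS mount is typically set up on the client side (the server name, export path and mount point below are placeholders, not the actual testbed values):

```shell
# Hypothetical example: mounting a dCache NFS 4.1 / pNFS export on an
# SL5 worker node running a 2.6.35 pNFS-enabled kernel.
# "dcache-door.example.org" and the paths are placeholders.

# One-off mount; "minorversion=1" selects NFS 4.1, which lets the client
# request pNFS layouts and talk to the pool nodes directly for data I/O.
mount -t nfs4 -o minorversion=1 dcache-door.example.org:/data /mnt/pnfs

# Equivalent /etc/fstab entry for a persistent mount:
# dcache-door.example.org:/data  /mnt/pnfs  nfs4  minorversion=1  0 0
```

With NFS 4.1, only the metadata traffic goes through the mounted door node; the layouts returned by the server direct bulk reads and writes straight to the pools.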

Storage and CPU Power
| Amount | Type | CPU | RAM | Cores | Network | Disk |
| 1 | dCache head node | Intel(R) Xeon(R) 5160 @ 3.00 GHz | 8 GB | 4 | 1 GBit | none |
| 5 | dCache pools | Intel(R) Xeon(R) 5520 @ 2.27 GHz | 12 GB | 16 | 10 GBit | 12 × 2 TBytes |
| 16 or 32 | Worker nodes | Intel(R) Xeon(R) 5150 @ 2.66 GHz | 4 GB | 8 | 1 GBit | none |

WN ← 1 GBit → Force10 ← 4 × 10 GBit → Arista ← 10 GBit → dCache pools

| Test type | What | Time / Amount | Result | Detail |
| Stability | CFEL data transfers | 10 days, 13 TBytes sustained writing, 100 GB average file size | Passed | OK |
| Stability | CFEL checksum | 13 TBytes; one machine | Slow | Very old client machine |
| Performance | HammerCloud | 128 cores against 100 TBytes | Still ongoing | Performance results are being evaluated |
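The checksum pass in the stability test amounts to reading every file back over the mount and comparing a digest against the value recorded at write time. A minimal sketch of that kind of verification, assuming Adler32 digests (dCache's default checksum type) and hypothetical file paths and stored values:

```python
import zlib


def adler32_of(path, chunk_size=1 << 20):
    """Compute the Adler32 checksum of a file, reading in 1 MiB chunks."""
    value = 1  # Adler32 initial value, as used by zlib.adler32
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            value = zlib.adler32(chunk, value)
    return value & 0xFFFFFFFF


def verify(expected):
    """Compare files on the mount against checksums recorded at write time.

    'expected' maps file path -> stored Adler32 value; returns a dict of
    path -> True/False. Paths and stored values here are placeholders.
    """
    return {path: adler32_of(path) == chk for path, chk in expected.items()}
```

Because the files are read in fixed-size chunks, the verification runs in constant memory regardless of file size, which matters when the data set is tens of terabytes.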

-- PatrickFuhrmann - 02-Sep-2010

Topic revision: r5 - 2011-02-02 - PatrickFuhrmann