Configuration and hardware of sites belonging to the data federation

Please copy the following questions and fill in your answers under the label corresponding to your storage system:


cms_site_name:
storage technology and version:
xrootd server version:
how many xrootd servers are installed?
usage of xrootd proxy?
usage of xrootd manager (cmsd)?
head node hardware description:
is there load-balancing and fail over?
network interfaces of headnode and servers:
bandwidth from xrootd head node and outside:
are there limitations in the number of allowed accesses to the system (number of queries)?
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink?
are you serving multiple VOs with your xrootd setup:
are you registered in the AAA monitoring system?
other key parameters we are missing?:
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints?
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?)
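
For reference, sites answering "TFC" to the /store/test/xrootd/$SITENAME question map LFNs to PFNs through a rule in the site's storage.xml rather than through a filesystem symlink. A minimal sketch of such a rule (the hostname and path prefix below are hypothetical placeholders, not any site's real values):

   <!-- storage.xml sketch: translate an xrootd-protocol LFN under /store/ into a site PFN.
        xrootd.example.site and the /data/cms prefix are hypothetical. -->
   <storage-mapping>
     <lfn-to-pfn protocol="xrootd" path-match="/+store/(.*)"
                 result="root://xrootd.example.site//data/cms/store/$1"/>
   </storage-mapping>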


castor

dcache

T2_DE_RWTH

cms_site_name: T2_DE_RWTH
storage technology and version: dcache 2.6.28-1 (on SL 6)
xrootd server version: xrootd-3.3.1-1.2.osg.el5 (on SL 5; an OS upgrade to SL 6 is planned for the near future)
how many xrootd server are installed? 1
usage of xrootd proxy? no
usage of xrootd manager (cmsd)? yes
head node hardware description: xrootd server: 8 cores, 16 GB RAM; dCache head node: 8 cores, 16 GB RAM
is there load-balancing and fail over? no
network interfaces of headnode and servers: port channels with 2*1 Gb/s
bandwidth from xrootd head node and outside: port channel with 2*1 Gb/s
are there limitations in the number of allowed accesses to the system (number of queries)? dCache xrootd door with maximum 1000 threads, 100 or 500 parallel active transfers per dCache pool depending on the pool
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: no
are you registered in the AAA monitoring system? yes
other key parameters we are missing?: the dates of the tests are not indicated in the plots (if the dates of the attachments correspond to the dates of the tests, the measurements were done during a phase of massive reorganisation of the whole dCache system); xrootd CMS TFC plugin (xrootd-cmstfc-1.5.1-6.osg.el5) on the xrootd server; authenticated access to the dCache xrootd door (xrootdPlugins=gplazma:gsi,authz:cms-tfc; see the sketch after this entry)
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? not applicable
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) yes: no parameter "cms.dfs" in configuration, java heap space 4 GB
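
Several dCache entries on this page quote xrootdPlugins=gplazma:gsi,authz:cms-tfc. As an illustration only, a sketch of what such a door definition can look like in a dCache 2.6-era layout file; the exact property names (in particular for the TFC file location and the heap size) differ between dCache releases, so check your release documentation rather than copying this verbatim:

   # layout-file sketch for a GSI-authenticated dCache xrootd door (illustrative)
   [xrootdDomain]
   [xrootdDomain/xrootd]
   xrootdPlugins=gplazma:gsi,authz:cms-tfc    # GSI authentication + CMS TFC authorization
   # hypothetical property name for the TFC location; verify against your release:
   xrootd.cms.tfc.path=/etc/dcache/storage.xml
   # java heap, per the CmsXrootdOpenFileTests recommendations (usually in dcache.conf):
   dcache.java.memory.heap=4096m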

T2_DE_DESY

cms_site_name: T2_DE_DESY
storage technology and version: dCache 2.8.4 (on SL6)
xrootd server version: 3.3.6 (SL6)
how many xrootd server are installed? 2
usage of xrootd proxy? no
usage of xrootd manager (cmsd)? yes
head node hardware description: xrootd site redirectors: virtual machines with 4 CPUs @ 2.5 GHz and 8 GB RAM; the dCache doors & head nodes are physical machines
is there load-balancing and fail over? Yes, two xrootd site redirectors; the xrootd doors are behind a load-balanced DNS alias
network interfaces of headnode and servers: Mix of 10Gb and 1Gb
bandwidth from xrootd head node and outside: xrootd site redirector 1Gb, site has 2x10Gb
are there limitations in the number of allowed accesses to the system (number of queries)? Number of parallel movers limited to 200 per pool
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: No, ATLAS has its own redirector and dCache
are you registered in the AAA monitoring system? No detailed monitoring yet
other key parameters we are missing?: xrootd CMS TFC plugin on xrootd site redirector, authenticated access to dCache xrootd doors (xrootdPlugins=gplazma:gsi,authz:cms-tfc)
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? not applicable
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) yes: no parameter "cms.dfs" in configuration, java heap space 4GB (or more)
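
Nearly every site here answers "yes" to the cmsd question: the xrootd server (or site redirector) runs a cmsd that subscribes it to a regional AAA redirector, while the xrootd-cmstfc plugin translates LFNs to local PFNs. A minimal sketch of such a configuration for a single xrootd 3.3.x server; the redirector hostname, library path and storage.xml location are illustrative defaults, not any specific site's values:

   # /etc/xrootd/xrootd-clustered.cfg sketch (illustrative values)
   all.role server
   all.manager xrootd-cms.infn.it+ 1213     # cmsd subscription to a regional redirector
   all.export /store r/o                    # export the CMS namespace read-only
   # LFN -> PFN translation via the CMS TFC (xrootd-cmstfc package):
   oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=xrootd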

T2_IT_LEGNARO

cms_site_name: T2_IT_LEGNARO
storage technology and version: dCache 2.6.33 (ns=Chimera)
xrootd server version: xrootd-3.3.1-1.2.osg.el5 on SL5
how many xrootd server are installed? one
usage of xrootd proxy? no
usage of xrootd manager (cmsd)? yes
head node hardware description: Virtual Machine configured with 4 vcores and 4 GB RAM
is there load-balancing and fail over? no
network interfaces of headnode and servers: headnode 1Gb/s, dcache pool servers 10Gb/s
bandwidth from xrootd head node and outside: 1 Gb/s
are there limitations in the number of allowed accesses to the system (number of queries)? max 1000 threads on the xrootd door; the limit of active transfers on the pools varies (200-500 per server)
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? symlink
are you serving multiple VOs with your xrootd setup: CMS only
are you registered in the AAA monitoring system? yes
other key parameters we are missing?:
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) yes: no parameter "cms.dfs" in configuration, java heap space already set to 2 GB

T3_CH_PSI

cms_site_name: T3_CH_PSI
storage technology and version: dCache 2.6.33 (ns=Chimera)
xrootd server version: xrootd-3.3.6-1.slc6.x86_64
how many xrootd server are installed? one
usage of xrootd proxy? no, we use https://twiki.cern.ch/twiki/bin/view/Main/DcacheXrootd
usage of xrootd manager (cmsd)? yes
head node hardware description: a simple VMware VM, 8 GB RAM, 4 vcores
is there load-balancing and fail over? no, but the VM will be restarted quickly and automatically if the host where it is running fails
network interfaces of headnode and servers: the headnode VM uses one 100 Mbit/s interface; 11 of the dCache pool servers have 4x1 Gbit/s in an LACP trunk, while 2 dCache pool servers have a 10 Gbit/s interface.
bandwidth from xrootd head node and outside: 100Mbit/s
are there limitations in the number of allowed accesses to the system (number of queries)? we allow two parallel xrootd requests per dCache pool server; further xrootd requests are queued waiting for one of those two slots (see the sketch after this entry).
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? N/A
are you serving multiple VOs with your xrootd setup: no, just CMS
are you registered in the AAA monitoring system? yes
other key parameters we are missing?: if it matters, the dCache xrootd door authenticates and authorizes each xrootd request via xrootdPlugins=gplazma:gsi,authz:cms-tfc; no anonymous access
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? N/A
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) yes
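
The "two slots, then queue" behaviour described above is the standard dCache per-pool mover limit. A sketch of how such a limit is typically set from the dCache admin interface (the pool name is hypothetical, and the exact command syntax varies across dCache versions; per-queue variants also exist):

   # dCache admin-shell sketch (illustrative)
   cd pool1_example          # connect to the pool cell (hypothetical pool name)
   mover set max active 2    # allow 2 concurrent movers; further requests are queued
   save                      # persist the setting in the pool's setup file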

T1_DE_KIT

storage version: dCache 2.6.34
xrootd server version: xrootd-3.3.6-1.1.osg31.el6.x86_64
how many xrootd server are installed: 1
usage of xrootd proxy: no
usage of xrootd manager (cmsd): yes
head node hardware description: xrootd server: VMware VM, 2 GB RAM, 1 vcpu
is there load-balancing and fail over: no
network interfaces of headnode and servers: The dCache pools all have a 10 GE interface.
bandwidth from xrootd head node and outside: Bandwidth for xrootd manager is irrelevant, since no data is tunneled through it (not a proxy).
are there limitations in the number of allowed accesses to the system (number of queries)? There are no restrictions on contacting either the xrootd manager or dCache, but active transfers are limited to a certain number (currently 25 per dCache pool); any more are queued.
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink?: TFC
are you serving multiple VOs with your xrootd setup: not currently, but potentially yes
are you registered in the AAA monitoring system? yes
other key parameters we are missing?:
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?): Don't know any recommendations. This TWiki tells us nothing new.

T1_ES_PIC

storage version: dCache 2.6.31 for the dcache xrootd doors
xrootd server version: xrootd-3.3.6-1.1.osg31.el6.x86_64
how many xrootd server are installed: 2
usage of xrootd proxy: no
usage of xrootd manager (cmsd): yes
head node hardware description: xrootd server: 2 VMs, 2 cores, 2 GB of RAM and 20 GB disk space each
is there load-balancing and fail over: yes, both between xrootd local servers and xrootd dcache doors
network interfaces of headnode and servers: The dCache pools all have a 10 GE interface.
bandwidth from xrootd head node and outside: Bandwidth for xrootd manager is irrelevant, since no data is tunneled through it (not a proxy).
are there limitations in the number of allowed accesses to the system (number of queries)? There are no restrictions for contacting either xrootd manager or dCache. Simultaneous active transfers are currently limited to 500 per dCache pool.
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink?: TFC
are you serving multiple VOs with your xrootd setup: yes, ATLAS, but on different disk pools and dCache doors; interference is only possible at the network level.
are you registered in the AAA monitoring system? yes
other key parameters we are missing?:
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?): Yes.

T1_FR_CCIN2P3

cms_site_name: T1_FR_CCIN2P3
storage technology and version: dCache, version 2.6.34 on core servers, 2.6.36 on pool servers.
xrootd server version: 3.3.6 running on SL6.5.
how many xrootd servers are installed? A single server for both the CMS T1 and T2, used as an xrootd proxy server.
usage of xrootd proxy? Yes
usage of xrootd manager (cmsd)? Yes
head node hardware description:
is there load-balancing and fail over? No
network interfaces of headnode and servers: 2x1Gb/s (bonded links)
bandwidth from xrootd head node and outside:
are there limitations in the number of allowed accesses to the system (number of queries)? No (only restricted by the bandwidth available on the server).
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: No
are you registered in the AAA monitoring system? Yes
other key parameters we are missing?: Shared installation for both T1_FR_CCIN2P3 and T2_FR_CCIN2P3
have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?): Yes; also https://twiki.cern.ch/twiki/bin/view/Main/DcacheXrootdOld

T2_FR_CCIN2P3

cms_site_name: T2_FR_CCIN2P3
storage technology and version: dCache, version 2.6.34 on core servers, 2.6.36 on pool servers.
xrootd server version: 3.3.6 running on SL6.5.
how many xrootd servers are installed? A single server for both the CMS T1 and T2, used as an xrootd proxy server.
usage of xrootd proxy? Yes
usage of xrootd manager (cmsd)? Yes
head node hardware description:
is there load-balancing and fail over? No
network interfaces of headnode and servers: 2x1Gb/s (bonded links)
bandwidth from xrootd head node and outside:
are there limitations in the number of allowed accesses to the system (number of queries)? No (only restricted by the bandwidth available on the server).
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: No
are you registered in the AAA monitoring system? Yes
other key parameters we are missing?: Shared installation for both T1_FR_CCIN2P3 and T2_FR_CCIN2P3
have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?): Yes; also https://twiki.cern.ch/twiki/bin/view/Main/DcacheXrootdOld

T2_ES_CIEMAT

cms_site_name: T2_ES_CIEMAT
storage technology and version: dcache-2.6.31-1 (on SL 5)
xrootd server version: Federation host: xrootd-3.3.1-1.2.osg.el6.x86_64 (on SL 6); dCache door: native code (dcache-2.6.31-1)
how many xrootd servers are installed? 1 federation host and 1 door (planning to increase to 2), but there are xrootd movers on every dCache pool (currently 37 servers)
usage of xrootd proxy? No
usage of xrootd manager (cmsd)? Yes
head node hardware description: Federation host: Virtual Machine, 2 cores, 8 GB RAM; dCache door (also a pool): 16 cores, 12 GB RAM
is there load-balancing and fail over? No (planning to add it when the second door is installed)
network interfaces of headnode and servers: Federation host (shared with other VMs): up to 1 x 10 Gbps. Door: 2 x 1 Gbps. Pools: 2/3 x 1 Gbps
bandwidth from xrootd head node and outside: Federation host: 1 Gbps. Door: 2 Gbps. Pools: 2/3 Gbps
are there limitations in the number of allowed accesses to the system (number of queries)? dCache xrootd door with maximum 10 threads (planning to increase this value)
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: No
are you registered in the AAA monitoring system? Yes
other key parameters we are missing?: xrootd CMS TFC plugin (xrootd-cmstfc-1.5.1-6.osg.el6.x86_64) on federation host, GSI-authenticated access to dCache xrootd door
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? Not applicable
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) Not really, but the settings are not so bad: no parameter "cms.dfs" in the configuration; java heap space 1 GB; max xrootd movers is 5

T2_CH_CSCS

cms_site_name: T2_CH_CSCS
storage technology and version: dCache 2.6.27
xrootd server version: 3.3.1
how many xrootd server are installed? 1
usage of xrootd proxy? Yes
usage of xrootd manager (cmsd)? Yes
head node hardware description: dCache: 2x IBM M4, 2x E5-2643 (3.3Ghz), 64GB RAM, 600GB local disk (RAID 1 SAS), xrootd: KVM 2x 2.6 GHz CPU, 12GB RAM
is there load-balancing and fail over? dCache cells are split between the two head nodes; no load-balancing or failover
network interfaces of headnode and servers: Mellanox QDR Infiniband bridged to 20Gb/s uplink
bandwidth from xrootd head node and outside: 10Gb shared vNIC
are there limitations in the number of allowed accesses to the system (number of queries)? Yes, limits of 2000 or 400 threads in xrootd
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? Not currently implemented due to a bug in dCache which broke symlinks in /pnfs; it will be fixed in an upcoming downtime by upgrading dCache
are you serving multiple VOs with your xrootd setup: Only CMS, ATLAS is working on this
are you registered in the AAA monitoring system? yes
other key parameters we are missing?: N/A
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? N/A
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) No; movers and JVM heap sizes are tuned as required based on the production load.

T1_RU_JINR

storage version: dCache 2.6.34
xrootd server version: xrootd-3.3.6-1.1.osg31.el6.x86_64
how many xrootd server are installed: 1
usage of xrootd proxy: no
usage of xrootd manager (cmsd): yes
head node hardware description: xrootd door: 1 VM, 4GB RAM, 3 vcpu; xrootd server: 1 VM, 2GB RAM, 2 vcpu
is there load-balancing and fail over: no
network interfaces of headnode and servers: The dCache pools all have a 10 GE interface.
bandwidth from xrootd head node and outside: Bandwidth for xrootd server and manager is irrelevant, since no data is tunneled through it.
are there limitations in the number of allowed accesses to the system (number of queries)? There are no restrictions on contacting either the xrootd manager or dCache, but active transfers are limited to a certain number (currently 500-1000 per dCache pool).
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink?: TFC
are you serving multiple VOs with your xrootd setup: no
are you registered in the AAA monitoring system? yes
other key parameters we are missing?:
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?): yes.

T2_RU_JINR

storage version: dCache 2.6.34
xrootd server version: xrootd-3.3.6-1.1.osg31.el6.x86_64
how many xrootd server are installed: 1
usage of xrootd proxy: no
usage of xrootd manager (cmsd): yes
head node hardware description: xrootd door: 1 VM, 2GB RAM, 2 vcpu; xrootd server: 1 VM, 2GB RAM, 2 vcpu
is there load-balancing and fail over: no
network interfaces of headnode and servers: The dCache pools all have a 2x1GbE trunk.
bandwidth from xrootd head node and outside: Bandwidth for xrootd server and manager is irrelevant, since no data is tunneled through it (not a proxy).
are there limitations in the number of allowed accesses to the system (number of queries)? There are no restrictions on contacting either the xrootd manager or dCache, but active transfers are limited to a certain number (currently 500 per dCache pool).
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink?: TFC
are you serving multiple VOs with your xrootd setup: yes
are you registered in the AAA monitoring system? yes
other key parameters we are missing?:
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?): yes.

T2_KR_KNU

cms_site_name: T2_KR_KNU
storage technology and version: dCache 2.6.34 (SL6)
xrootd server version: xrootd 3.3.6-1.1 (SL6)
how many xrootd server are installed? 1
usage of xrootd proxy? no
usage of xrootd manager (cmsd)? yes
head node hardware description: dcache head node: 8 cores, 24 GB RAM
is there load-balancing and fail over? no
network interfaces of headnode and servers: headnode: 10 Gbps, dcache pool servers: 1-10 Gbps
bandwidth from xrootd head node and outside: Bandwidth for xrootd manager is irrelevant, since no data is tunneled through it (not a proxy).
are there limitations in the number of allowed accesses to the system (number of queries)? There are no restrictions on contacting either the xrootd manager or dCache, but active transfers are limited to 500-1000 per dCache pool.
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? symlink
are you serving multiple VOs with your xrootd setup: no
are you registered in the AAA monitoring system? yes
other key parameters we are missing?:
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) yes: no parameter "cms.dfs" in configuration, java heap space 2 GB

dpm

T2_FR_GRIF_LLR

cms_site_name: T2_FR_GRIF_LLR
storage technology and version: dpm v1.8.8-4 (on SL 6)
xrootd server version: xrootd v3.3.6-1 (on SL 6)
how many xrootd server are installed? 1 xrootd head node and 19 disk servers
usage of xrootd proxy? no
usage of xrootd manager (cmsd)? yes
head node hardware description: xrootd and dpm are co-located on the same server: 4 cores (Intel E5506), 8 GB RAM
is there load-balancing and fail over? no
network interfaces of headnode and servers: 10 Gb Ethernet
bandwidth from xrootd head node and outside: 10 Gb/s
are there limitations in the number of allowed accesses to the system (number of queries)? The NB_XTHREADS parameter and the system limits have been reconfigured to follow the suggestions in the tuning wiki (see the DPM tuning sketch further below).
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: yes
are you registered in the AAA monitoring system? yes
other key parameters we are missing?:
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? yes, the changes were implemented on 15/10/2014
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) not applicable

T2_FR_IPHC

cms_site_name: T2_FR_IPHC
storage technology and version: dpm v1.8.8-4 (on SL 6)
xrootd server version: xrootd v3.3.6-1 (on SL 6)
how many xrootd server are installed? 1 xrootd head node and 10 disk servers
usage of xrootd proxy? no
usage of xrootd manager (cmsd)? yes
head node hardware description: xrootd and dpm are co-located on the same server: 8 cores (E5640), 12 GB RAM
is there load-balancing and fail over? no
network interfaces of headnode and servers: 10 Gb Ethernet
bandwidth from xrootd head node and outside: 10 Gb/s
are there limitations in the number of allowed accesses to the system (number of queries)? FTHREADS and STHREADS are set as described on the Dpm/Admin/TuningHints page, and we have the following parameter in security/limits.conf: "dpmmgr soft nproc unlimited" (see the sketch after this entry)
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: no
are you registered in the AAA monitoring system? yes
other key parameters we are missing?: We are using the DPM/XRootD plugin
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? yes (the services have been restarted, but not the systems)
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) not applicable
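
The DPM tuning referenced in the last few entries (and by T2_FR_GRIF_LLR's NB_XTHREADS remark above) amounts to enlarging the daemon thread pools and lifting the dpmmgr process limits. A sketch assuming a standard DPM installation; the thread counts below are placeholders, and the recommended values are on the Dpm/Admin/TuningHints page:

   # /etc/security/limits.conf (as quoted in the entries above)
   dpmmgr soft nproc unlimited
   # /etc/sysconfig/dpm: fast/slow thread pools (placeholder values):
   NB_FTHREADS=80
   NB_STHREADS=20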

T2_AT_Vienna

cms_site_name: T2_AT_Vienna
storage technology and version: dpm v1.8.8-4 (on SL 6)
xrootd server version: xrootd v3.3.6-1 (on SL 6)
how many xrootd server are installed? 1 xrootd head node and 10 disk servers
usage of xrootd proxy? no
usage of xrootd manager (cmsd)? yes
head node hardware description: xrootd and dpm are co-located on the same server: 8 cores (Intel Xeon L5240), 16 GB RAM
is there load-balancing and fail over? no
network interfaces of headnode and servers: 1 Gb Ethernet on the head node; most of the data servers have 10 Gb
bandwidth from xrootd head node and outside: Outside 10 Gb/s
are there limitations in the number of allowed accesses to the system (number of queries)? FTHREADS and STHREADS are set as described on the Dpm/Admin/TuningHints page and we have the following parameters in security/limits.conf: "dpmmgr soft nproc unlimited"
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: yes
are you registered in the AAA monitoring system? yes
other key parameters we are missing?: We are using the DPM/XRootD plugin
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? yes (the services have been restarted, but not the systems)

storm

T2_IT_Pisa

cms_site_name: T2_IT_Pisa
storage technology and version: storm 1.11.4 (on sl6)
xrootd server version: xrootd-3.3.4-1 on sl6
how many xrootd server are installed? 2
usage of xrootd proxy? no
usage of xrootd manager (cmsd)? no
head node hardware description: no head node
is there load-balancing and fail over? yes
network interfaces of headnode and servers: each server has a 10 Gbit/s connection
bandwidth from xrootd head node and outside: each server has a 10 Gbit/s connection
are there limitations in the number of allowed accesses to the system (number of queries)? thread number set to 4096 per server (see the sketch after this entry)
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: no
are you registered in the AAA monitoring system? yes
other key parameters we are missing?:
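
In a StoRM setup without a cmsd manager, the per-server thread cap quoted above is set directly in the xrootd configuration. A minimal sketch; only maxt corresponds to the 4096 figure above, the other scheduler parameters are placeholders:

   # xrootd config sketch: cap the scheduler thread pool (placeholder mint/avlt)
   xrd.sched mint 32 avlt 64 maxt 4096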

T2_ES_IFCA

cms_site_name: T2_ES_IFCA
storage technology and version: storm 1.11.3 (on sl6)/GPFS 3.5
xrootd server version: xrootd-3.3.6-1 on sl6
how many xrootd server are installed? 8
usage of xrootd proxy? no
usage of xrootd manager (cmsd)? no
head node hardware description: no head node
is there load-balancing and fail over? yes
network interfaces of headnode and servers: each server has a shared 10 Gbit/s connection
bandwidth from xrootd head node and outside: each server has a shared 10 Gbit/s connection
are there limitations in the number of allowed accesses to the system (number of queries)? thread number set to 1024 per server
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: no
are you registered in the AAA monitoring system? yes
other key parameters we are missing?:

Hadoop/BeStMan

T2_US_Purdue

cms_site_name: T2_US_Purdue
storage technology and version: Hadoop 2.0: hadoop-2.0.0+545-1.cdh4.1.1.p0.20.osg32.el6.x86_64
xrootd server version: Xrootd 3.3.6: xrootd-3.3.6-1.1.osg32.el6.x86_64
how many xrootd server are installed?: 90
usage of xrootd proxy?: No
usage of xrootd manager (cmsd)?: Yes
head node hardware description: 16 cores (AMD Opteron 4280), 16 GB RAM
is there load-balancing and fail over?: No
network interfaces of headnode and servers: 10G
bandwidth from xrootd head node and outside: head node is 10G, WAN is 100G
are there limitations in the number of allowed accesses to the system (number of queries)?: Number of open files is limited to 61440
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink?: TFC
are you serving multiple VOs with your xrootd setup: No
are you registered in the AAA monitoring system?: Yes
other key parameters we are missing?: The site is using RHEL 6.5 as the OS on all nodes. The following kernel tweaks have been implemented (see the persistence sketch after this list):
1. Disabled transparent Huge Pages Compaction: 'echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag'
2. Changed value of vfs_cache_pressure from 100 to 10: 'echo 10 > /proc/sys/vm/vfs_cache_pressure'
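
Both commands take effect immediately but do not survive a reboot. A sketch of the usual RHEL 6 way to make them persistent (standard file locations assumed):

   # /etc/sysctl.conf: vfs_cache_pressure is an ordinary sysctl:
   vm.vfs_cache_pressure = 10
   # /etc/rc.d/rc.local: the THP defrag knob is not a sysctl, so set it at boot:
   echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag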

LStore/BeStMan

Lustre/BeStMan

-- FedericaFanzago - 22 Sep 2014
