Difference: Sites_setup (1 vs. 23)

Revision 23 (2016-10-11) - JohanHenrikGuldmyr

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 27 to 27
 

castor

dcache

Added:
>
>

T2_FI_HIP


cms_site_name: T2_FI_HIP
storage technology and version: dcache-2.15
xrootd server version: 4.4
how many xrootd server are installed? 2
usage of xrootd proxy? yes
usage of xrootd manager (cmsd)? yes
head node hardware description: virtual machines, 2 cores, 4800 MB RAM
is there load-balancing and fail over? no
network interfaces of headnode and servers: 2x1Gb shared
bandwidth from xrootd head node and outside: 2x1Gb shared
are there limitations in the number of allowed accesses to the system (number of queries)? 2 workers and 4 worker threads
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? tfc
are you serving multiple VOs with your xrootd setup: no
are you registered in the AAA monitoring system? yes
other key parameters we are missing?:
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) yes
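
As an illustration of the "proxy plus cmsd" combination this entry reports, a minimal XRootD 4.x configuration sketch; the host names, port, and file path are placeholders, not T2_FI_HIP's actual values:

# /etc/xrootd/xrootd-proxy.cfg -- illustrative sketch, not a site's real file
all.role server
all.manager redirector.example.org+ 1213      # local cmsd manager (hypothetical host)
ofs.osslib libXrdPss.so                       # enable the proxy storage system
pss.origin dcache-door.example.org:1094       # backend dCache xrootd door (hypothetical)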
 

T2_DE_RWTH


cms_site_name: T2_DE_RWTH

Revision 22 (2014-11-25) - SebastienGadrat

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 147 to 147
 

T1_FR_CCIN2P3


cms_site_name: T1_FR_CCIN2P3
Changed:
<
<
storage technology and version: dCache, version 2.6.34 on core servers, 2.6.29+ on pool servers.
>
>
storage technology and version: dCache, version 2.6.34 on core servers, 2.6.36 on pool servers.
 xrootd server version: 3.3.6 running on SL6.5.
how many xrootd server are installed? One single server for both CMS T1 & T2, used as an xrootd proxy server.
usage of xrootd proxy? Yes
Line: 168 to 168
 

T2_FR_CCIN2P3


cms_site_name: T2_FR_CCIN2P3
Changed:
<
<
storage technology and version: dCache, version 2.6.34 on core servers, 2.6.29+ on pool servers.
>
>
storage technology and version: dCache, version 2.6.34 on core servers, 2.6.36 on pool servers.
 xrootd server version: 3.3.6 running on SL6.5.
how many xrootd server are installed? One single server for both CMS T1 & T2, used as an xrootd proxy server.
usage of xrootd proxy? Yes

Revision 21 (2014-10-28) - ChristophWissing

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 47 to 47
 (for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? not applicable
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) yes: no parameter "cms.dfs" in configuration, java heap space 4 GB
Added:
>
>

T2_DE_DESY


cms_site_name: T2_DE_DESY
storage technology and version: dCache 2.8.4 (on SL6)
xrootd server version: 3.3.6 (SL6)
how many xrootd server are installed? 2
usage of xrootd proxy? no
usage of xrootd manager (cmsd)? yes
head node hardware description (xrootd site redirectors): virtual machines, 4 CPUs @ 2.5 GHz, 8 GB RAM; dCache doors & head nodes are physical machines
is there load-balancing and fail over? Yes, two xrootd site redirectors, and xrootd doors behind a load-balanced DNS alias
network interfaces of headnode and servers: Mix of 10Gb and 1Gb
bandwidth from xrootd head node and outside: xrootd site redirector 1Gb, site has 2x10Gb
are there limitations in the number of allowed accesses to the system (number of queries)? Number of parallel movers limited to 200 per pool
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: No; ATLAS has its own redirector and dCache
are you registered in the AAA monitoring system? No detailed monitoring yet
other key parameters we are missing?: xrootd CMS TFC plugin on xrootd site redirector, authenticated access to dCache xrootd doors (xrootdPlugins=gplazma:gsi,authz:cms-tfc)
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? not applicable
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) yes: no parameter "cms.dfs" in configuration, java heap space 4GB (or more)
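
The two dCache-side settings quoted in this entry (the authenticated TFC-aware xrootd doors and the enlarged JVM heap) would look roughly like this in a dCache 2.x configuration; a sketch assuming standard property names, not DESY's actual file:

# /etc/dcache/dcache.conf -- sketch based on the values reported above
dcache.java.memory.heap=4096m                  # "java heap space 4GB (or more)"
xrootdPlugins=gplazma:gsi,authz:cms-tfc        # GSI-authenticated door with CMS TFC authz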
 

T2_IT_LEGNARO


cms_site_name: T2_IT_LEGNARO

Revision 20 (2014-10-24) - DaeHeeHan

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 246 to 246
 (for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?): yes.
Added:
>
>

T2_KR_KNU


cms_site_name: T2_KR_KNU
storage technology and version: dCache 2.6.34 (SL6)
xrootd server version: xrootd 3.3.6-1.1 (SL6)
how many xrootd server are installed? 1
usage of xrootd proxy? no
usage of xrootd manager (cmsd)? yes
head node hardware description (dCache head node): 8 cores, 24 GB RAM
is there load-balancing and fail over? no
network interfaces of headnode and servers: headnode: 10Gbps, dcache pool servers: 1~10Gbps
bandwidth from xrootd head node and outside: Bandwidth for xrootd manager is irrelevant, since no data is tunneled through it (not a proxy).
are there limitations in the number of allowed accesses to the system (number of queries)? There are no restrictions for contacting either the xrootd manager or dCache, but active transfers are limited to 500~1000 per dCache pool.
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? symlink
are you serving multiple VOs with your xrootd setup: no
are you registered in the AAA monitoring system? yes
other key parameters we are missing?:
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) yes: no parameter "cms.dfs" in configuration, java heap space 2 GB
 

dpm

T2_FR_GRIF_LLR


Revision 19 (2014-10-20) - SebastienGadrat

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 124 to 124
 (for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?): Yes.
Added:
>
>

T1_FR_CCIN2P3


cms_site_name: T1_FR_CCIN2P3
storage technology and version: dCache, version 2.6.34 on core servers, 2.6.29+ on pool servers.
xrootd server version: 3.3.6 running on SL6.5.
how many xrootd server are installed? One single server for both CMS T1 & T2, used as an xrootd proxy server.
usage of xrootd proxy? Yes
usage of xrootd manager (cmsd)? Yes
head node hardware description:
is there load-balancing and fail over? No
network interfaces of headnode and servers: 2x1Gb/s (bonded links)
bandwidth from xrootd head node and outside:
are there limitations in the number of allowed accesses to the system (number of queries)? No (only restricted by the bandwidth available on the server).
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: No
are you registered in the AAA monitoring system? Yes
other key parameters we are missing?: Shared installation for both T1_FR_CCIN2P3 and T2_FR_CCIN2P3
have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?): Yes; see https://twiki.cern.ch/twiki/bin/view/Main/DcacheXrootdOld.

T2_FR_CCIN2P3


cms_site_name: T2_FR_CCIN2P3
storage technology and version: dCache, version 2.6.34 on core servers, 2.6.29+ on pool servers.
xrootd server version: 3.3.6 running on SL6.5.
how many xrootd server are installed? One single server for both CMS T1 & T2, used as an xrootd proxy server.
usage of xrootd proxy? Yes
usage of xrootd manager (cmsd)? Yes
head node hardware description:
is there load-balancing and fail over? No
network interfaces of headnode and servers: 2x1Gb/s (bonded links)
bandwidth from xrootd head node and outside:
are there limitations in the number of allowed accesses to the system (number of queries)? No (only restricted by the bandwidth available on the server).
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: No
are you registered in the AAA monitoring system? Yes
other key parameters we are missing?: Shared installation for both T1_FR_CCIN2P3 and T2_FR_CCIN2P3
have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?): Yes; see https://twiki.cern.ch/twiki/bin/view/Main/DcacheXrootdOld.
 

T2_ES_CIEMAT


cms_site_name: T2_ES_CIEMAT

Revision 18 (2014-10-14) - AndreaSartirana

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 205 to 205
 

dpm

Added:
>
>

T2_FR_GRIF_LLR


cms_site_name: T2_FR_GRIF_LLR
storage technology and version: dpm v1.8.8-4 (on SL 6)
xrootd server version: xrootd v3.3.6-1 (on SL 6)
how many xrootd server are installed? 1 xrootd head node and 19 disk servers
usage of xrootd proxy? no
usage of xrootd manager (cmsd)? yes
head node hardware description (xrootd and dpm are collocated on the same server): 4 cores (Intel E5506), 8 GB RAM
is there load-balancing and fail over? no
network interfaces of headnode and servers: 10 Gb Ethernet
bandwidth from xrootd head node and outside: 10 Gb/s
are there limitations in the number of allowed accesses to the system (number of queries)? The NB_XTHREADS parameters and system limits have been reconfigured to fit the suggestions in the Tuning wiki.
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: yes
are you registered in the AAA monitoring system? yes
other key parameters we are missing?:
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? yes, implementing the changes on 15/10/2014
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) not applicable
 

T2_FR_IPHC


cms_site_name: T2_FR_IPHC

Revision 17 (2014-10-07) - DietrichLiko

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 225 to 225
 (for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? yes (the services have been restarted, but not the systems)
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) not applicable
Added:
>
>

T2_AT_Vienna


cms_site_name: T2_AT_Vienna
storage technology and version: dpm v1.8.8-4 (on SL 6)
xrootd server version: xrootd v3.3.6-1 (on SL 6)
how many xrootd server are installed? 1 xrootd head node and 10 disk servers
usage of xrootd proxy? no
usage of xrootd manager (cmsd)? yes
head node hardware description (xrootd and dpm are collocated on the same server): 8 cores (Intel Xeon L5240), 16 GB RAM
is there load-balancing and fail over? no
network interfaces of headnode and servers: 1 Gb Ethernet on the head node; most of the data on 10 Gb
bandwidth from xrootd head node and outside: outside 10 Gb/s
are there limitations in the number of allowed accesses to the system (number of queries)? FTHREADS and STHREADS are set as described on the Dpm/Admin/TuningHints page, and we have the following parameters in security/limits.conf: "dpmmgr soft nproc unlimited"
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: yes
are you registered in the AAA monitoring system? yes
other key parameters we are missing?: We are using the DPM/XRootD plugin
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? yes (the services have been restarted, but not the systems)
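
For orientation, the tuning this entry describes might be applied as follows; the limits.conf line is quoted from the entry itself, while the thread-count variable names and values are assumptions to be checked against the Dpm/Admin/TuningHints page:

# /etc/security/limits.conf -- quoted from the entry above
dpmmgr soft nproc unlimited

# /etc/sysconfig/dpm -- illustrative only; names and values assumed
NB_FTHREADS=80    # "fast" thread count (assumed)
NB_STHREADS=20    # "slow" thread count (assumed)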
 

storm

T2_IT_Pisa


Revision 16 (2014-10-07) - ValeryMitsyn

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 166 to 166
 (for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) No; movers and JVM heap sizes are tuned as required based upon production load.
Added:
>
>

T1_RU_JINR


storage version: dCache 2.6.34
xrootd server version: xrootd-3.3.6-1.1.osg31.el6.x86_64
how many xrootd server are installed: 1
usage of xrootd proxy: no
usage of xrootd manager (cmsd): yes
head node hardware description: xrootd door: 1 VM, 4GB RAM, 3 vcpu; xrootd server: 1 VM, 2GB RAM, 2 vcpu
is there load-balancing and fail over: no
network interfaces of headnode and servers: The dCache pools all have a 10 GE interface.
bandwidth from xrootd head node and outside: Bandwidth for xrootd server and manager is irrelevant, since no data is tunneled through it.
are there limitations in the number of allowed accesses to the system (number of queries)? There are no restrictions for contacting either the xrootd manager or dCache, but active transfers are limited to a certain number, right now it's 500-1000 per dCache pool.
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink?: TFC
are you serving multiple VOs with your xrootd setup: no
are you registered in the AAA monitoring system? yes
other key parameters we are missing?:
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?): yes.

T2_RU_JINR


storage version: dCache 2.6.34
xrootd server version: xrootd-3.3.6-1.1.osg31.el6.x86_64
how many xrootd server are installed: 1
usage of xrootd proxy: no
usage of xrootd manager (cmsd): yes
head node hardware description: xrootd door: 1 VM, 2GB RAM, 2 vcpu; xrootd server: 1 VM, 2GB RAM, 2 vcpu
is there load-balancing and fail over: no
network interfaces of headnode and servers: The dCache pools all have a 2x1GbE trunk.
bandwidth from xrootd head node and outside: Bandwidth for xrootd server and manager is irrelevant, since no data is tunneled through it (not a proxy).
are there limitations in the number of allowed accesses to the system (number of queries)? There are no restrictions for contacting either the xrootd manager or dCache, but active transfers are limited to a certain number, right now it's 500 per dCache pool.
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink?: TFC
are you serving multiple VOs with your xrootd setup: yes
are you registered in the AAA monitoring system? yes
other key parameters we are missing?:
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?): yes.
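
The per-pool transfer caps mentioned in the JINR entries are the kind of limit normally set from the dCache admin shell on each pool cell; a sketch, with the pool name hypothetical:

(local) admin > cd pool_01
(pool_01) admin > mover set max active 500    # cap concurrent movers on this pool
(pool_01) admin > save                        # persist the pool setup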
 

dpm

T2_FR_IPHC


Revision 15 (2014-10-06) - AntonioPerezCalero

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 105 to 105
 (for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?): Don't know any recommendations. This TWiki tells us nothing new.
Added:
>
>

T1_ES_PIC


storage version: dCache 2.6.31 for the dcache xrootd doors
xrootd server version: xrootd-3.3.6-1.1.osg31.el6.x86_64
how many xrootd server are installed: 2
usage of xrootd proxy: no
usage of xrootd manager (cmsd): yes
head node hardware description: xrootd server: 2 VMs, 2 cores, 2 GB of RAM and 20 GB disk space each
is there load-balancing and fail over: yes, both between xrootd local servers and xrootd dcache doors
network interfaces of headnode and servers: The dCache pools all have a 10 GE interface.
bandwidth from xrootd head node and outside: Bandwidth for xrootd manager is irrelevant, since no data is tunneled through it (not a proxy).
are there limitations in the number of allowed accesses to the system (number of queries)? There are no restrictions for contacting either xrootd manager or dCache. Simultaneous active transfers are currently limited to 500 per dCache pool.
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink?: TFC
are you serving multiple VOs with your xrootd setup: yes, ATLAS, but on different disk pools and dCache doors; interference is only possible at the network level.
are you registered in the AAA monitoring system? yes
other key parameters we are missing?:
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?): Yes.
 

T2_ES_CIEMAT


cms_site_name: T2_ES_CIEMAT

Revision 14 (2014-10-02) - ErikGough

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 223 to 223
 are you serving multiple VOs with your xrootd setup: No
are you registered in the AAA monitoring system?: Yes
other key parameters we are missing?: Site is using RHEL 6.5 as OS on all the nodes. Implemented the following kernel tweaks.
Changed:
<
<
1. Disabled transparent Huge Pages: 'echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag'
>
>
1. Disabled transparent Huge Pages Compaction: 'echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag'
 2. Changed value of vfs_cache_pressure from 100 to 10: 'echo 10 > /proc/sys/vm/vfs_cache_pressure'

Revision 13 (2014-10-02) - ManojJha

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 127 to 127
 

T2_CH_CSCS


 cms_site_name: T2_CH_CSCS
storage technology and version: dCache 2.6.27
xrootd server version: 3.3.1
Line: 144 to 145
 other key parameters we are missing?: N/A
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? N/A
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) No; movers and JVM heap sizes are tuned as required based upon production load.
 

dpm

T2_FR_IPHC

Line: 205 to 206
 other key parameters we are missing?:

Hadoop/BeSTMan

Added:
>
>

T2_US_Purdue


cms_site_name: T2_US_Purdue
storage technology and version: Hadoop 2.0: hadoop-2.0.0+545-1.cdh4.1.1.p0.20.osg32.el6.x86_64
xrootd server version: Xrootd 3.3.6: xrootd-3.3.6-1.1.osg32.el6.x86_64
how many xrootd server are installed?: 90
usage of xrootd proxy?: No
usage of xrootd manager (cmsd)?: Yes
head node hardware description: 16-cores, AMD Opteron(tm) Processor 4280, 16 GB RAM
is there load-balancing and fail over?: No
network interfaces of headnode and servers: 10G
bandwidth from xrootd head node and outside: head node is 10G, WAN is 100G
are there limitations in the number of allowed accesses to the system (number of queries)?: Number of open files is limited to 61440
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink?: TFC
are you serving multiple VOs with your xrootd setup: No
are you registered in the AAA monitoring system?: Yes
other key parameters we are missing?: Site is using RHEL 6.5 as OS on all the nodes. Implemented the following kernel tweaks.
1. Disabled transparent Huge Pages: 'echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag'
2. Changed value of vfs_cache_pressure from 100 to 10: 'echo 10 > /proc/sys/vm/vfs_cache_pressure'
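
For completeness, the two tweaks above as runtime commands, together with one common (assumed, not necessarily Purdue's) way of making them survive a reboot:

# runtime, as quoted in the entry
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
echo 10 > /proc/sys/vm/vfs_cache_pressure

# persistence -- one common approach; file paths assumed
echo 'vm.vfs_cache_pressure = 10' >> /etc/sysctl.conf
echo 'echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag' >> /etc/rc.local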

 

LStore/BeStMan

Lustre/BeSTMan

Revision 12 (2014-10-01) - JeromePansanel

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 147 to 147
 

dpm

Added:
>
>

T2_FR_IPHC


cms_site_name: T2_FR_IPHC
storage technology and version: dpm v1.8.8-4 (on SL 6)
xrootd server version: xrootd v3.3.6-1 (on SL 6)
how many xrootd server are installed? 1 xrootd head node and 10 disk servers
usage of xrootd proxy? no
usage of xrootd manager (cmsd)? yes
head node hardware description (xrootd and dpm are collocated on the same server): 8 cores (E5640), 12 GB RAM
is there load-balancing and fail over? no
network interfaces of headnode and servers: 10 Gb Ethernet
bandwidth from xrootd head node and outside: 10 Gb/s
are there limitations in the number of allowed accesses to the system (number of queries)? FTHREADS and STHREADS are set as described on the Dpm/Admin/TuningHints page and we have the following parameters in security/limits.conf: "dpmmgr soft nproc unlimited"
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: no
are you registered in the AAA monitoring system? yes
other key parameters we are missing?: We are using the DPM/XRootD plugin
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? yes (the services have been restarted, but not the systems)
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) not applicable
 

storm

T2_IT_Pisa


Revision 11 (2014-10-01) - MiguelGila

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 126 to 126
 (for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) Not really, but the settings are not so bad: no parameter "cms.dfs" in configuration; java heap space 1 GB; max xrootd movers is 5
Added:
>
>

T2_CH_CSCS

cms_site_name: T2_CH_CSCS
storage technology and version: dCache 2.6.27
xrootd server version: 3.3.1
how many xrootd server are installed? 1
usage of xrootd proxy? Yes
usage of xrootd manager (cmsd)? Yes
head node hardware description: dCache: 2x IBM M4, 2x E5-2643 (3.3 GHz), 64 GB RAM, 600 GB local disk (RAID 1 SAS); xrootd: KVM, 2x 2.6 GHz CPU, 12 GB RAM
is there load-balancing and fail over? dCache cells are split between the two headnodes; no load-balancing or failover
network interfaces of headnode and servers: Mellanox QDR Infiniband bridged to 20Gb/s uplink
bandwidth from xrootd head node and outside: 10Gb shared vNIC
are there limitations in the number of allowed accesses to the system (number of queries)? Yes, limits of 2000 or 400 threads in xrootd
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? Not currently implemented due to a bug in dCache that broke symlinks in /pnfs; it will be fixed in an upcoming downtime by upgrading dCache
are you serving multiple VOs with your xrootd setup: Only CMS, ATLAS is working on this
are you registered in the AAA monitoring system? yes
other key parameters we are missing?: N/A
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? N/A
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) No; movers and JVM heap sizes are tuned as required based upon production load.
 

dpm

storm

Revision 10 (2014-09-30) - AntonioDelgadoPeris

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 105 to 105
 (for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?): Don't know any recommendations. This TWiki tells us nothing new.
Added:
>
>

T2_ES_CIEMAT


cms_site_name: T2_ES_CIEMAT
storage technology and version: dcache-2.6.31-1 (on SL 5)
xrootd server version: Federation host: xrootd-3.3.1-1.2.osg.el6.x86_64 (on SL 6); dCache door: native code (dcache-2.6.31-1)
how many xrootd server are installed? 1 federation host and 1 door (planning to increase to 2), but there are xrootd movers on every dCache pool (37 servers currently)
usage of xrootd proxy? No
usage of xrootd manager (cmsd)? Yes
head node hardware description: Federation host: virtual machine, 2 cores, 8 GB RAM; dCache door (also a pool): 16 cores, 12 GB RAM
is there load-balancing and fail over? No (planning to add it when the second door is installed)
network interfaces of headnode and servers: Federation host (shared with other VMs): up to 1 x 10 Gbps. Door: 2 x 1 Gbps. Pools: 2/3 x 1 Gbps
bandwidth from xrootd head node and outside: Federation host: 1 Gbps. Door: 2 Gbps. Pools: 2/3 Gbps
are there limitations in the number of allowed accesses to the system (number of queries)? dCache xrootd door with maximum 10 threads (planning to increase this value)
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: No
are you registered in the AAA monitoring system? Yes
other key parameters we are missing?: xrootd CMS TFC plugin (xrootd-cmstfc-1.5.1-6.osg.el6.x86_64) on federation host, GSI-authenticated access to dCache xrootd door
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? Not applicable
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) Not really, but the settings are not so bad: no parameter "cms.dfs" in configuration; java heap space 1 GB; max xrootd movers is 5
 

dpm

storm

T2_IT_Pisa

Revision 9 (2014-09-30) - MassimoBiasotto

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 47 to 47
 (for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? not applicable
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) yes: no parameter "cms.dfs" in configuration, java heap space 4 GB
Added:
>
>

T2_IT_LEGNARO


cms_site_name: T2_IT_LEGNARO
storage technology and version: dCache 2.6.33 (ns=Chimera)
xrootd server version: xrootd-3.3.1-1.2.osg.el5 on SL5
how many xrootd server are installed? one
usage of xrootd proxy? no
usage of xrootd manager (cmsd)? yes
head node hardware description: virtual machine configured with 4 vcores and 4 GB RAM
is there load-balancing and fail over? no
network interfaces of headnode and servers: headnode 1Gb/s, dcache pool servers 10Gb/s
bandwidth from xrootd head node and outside: 1 Gb/s
are there limitations in the number of allowed accesses to the system (number of queries)? max 1000 threads on the xrootd door; the limit of active transfers on the pools varies (200-500 per server)
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? symlink
are you serving multiple VOs with your xrootd setup: CMS only
are you registered in the AAA monitoring system? yes
other key parameters we are missing?:
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) yes: no parameter "cms.dfs" in configuration, java heap space already set to 2 GB
 

T3_CH_PSI


cms_site_name: T3_CH_PSI

Revision 8 (2014-09-29) - FabioMartinelli

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 48 to 48
 (for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) yes: no parameter "cms.dfs" in configuration, java heap space 4 GB

T3_CH_PSI

cms_site_name: T3_CH_PSI
storage technology and version: dCache 2.6.33 (ns=Chimera)
xrootd server version: xrootd-3.3.6-1.slc6.x86_64
how many xrootd server are installed? one
usage of xrootd proxy? no, we use https://twiki.cern.ch/twiki/bin/view/Main/DcacheXrootd
usage of xrootd manager (cmsd)? yes
head node hardware description: a simple VMware VM, 8 GB RAM, 4 vcores
is there load-balancing and fail over? no, but the VM will be quickly and automatically restarted if the host where it is running fails
network interfaces of headnode and servers: the headnode VM uses one 100 Mbit/s interface; the 11 dCache pool servers have 4x1 Gbit/s in an LACP trunk, while 2 dCache pool servers have 10 Gbit/s
bandwidth from xrootd head node and outside: 100 Mbit/s
are there limitations in the number of allowed accesses to the system (number of queries)? we allow two parallel xrootd requests per single dCache pool server; the other xrootd requests get queued waiting for one of those two slots
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? N/A
are you serving multiple VOs with your xrootd setup: no, just CMS
are you registered in the AAA monitoring system? yes
other key parameters we are missing?: if that matters, the dCache xrootd door authenticates and authorizes each xrootd request via xrootdPlugins=gplazma:gsi,authz:cms-tfc; no anonymous access
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? N/A
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) yes
 

T1_DE_KIT


storage version: dCache 2.6.34

Revision 7 (2014-09-26) - PreslavKonstantinov

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 67 to 67
 (for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? N/A
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) yes
Added:
>
>

T1_DE_KIT


storage version: dCache 2.6.34
xrootd server version: xrootd-3.3.6-1.1.osg31.el6.x86_64
how many xrootd server are installed: 1
usage of xrootd proxy: no
usage of xrootd manager (cmsd): yes
head node hardware description: xrootd server: VMware VM, 2 GB RAM, 1 vcpu
is there load-balancing and fail over: no
network interfaces of headnode and servers: The dCache pools all have a 10 GE interface.
bandwidth from xrootd head node and outside: Bandwidth for xrootd manager is irrelevant, since no data is tunneled through it (not a proxy).
are there limitations in the number of allowed accesses to the system (number of queries)? There are no restrictions for contacting either the xrootd manager or dCache, but active transfers are limited to a certain number (right now it's 25 per dCache pool); any more are queued.
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink?: TFC
are you serving multiple VOs with your xrootd setup: no (potentially yes)
are you registered in the AAA monitoring system? yes
other key parameters we are missing?:
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?): Don't know any recommendations. This TWiki tells us nothing new.
 

dpm

storm

T2_IT_Pisa

Revision 6 (2014-09-25) - IbanCabrillo

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 87 to 87
 are you registered in the AAA monitoring system? yes
other key parameters we are missing?:
Added:
>
>

T2_ES_IFCA


cms_site_name: T2_ES_IFCA
storage technology and version: storm 1.11.3 (on sl6)/GPFS 3.5
xrootd server version: xrootd.3.3.6-1 on sl6
how many xrootd server are installed? 8
usage of xrootd proxy? no
usage of xrootd manager (cmsd)? no
head node hardware description: no head node
is there load-balancing and fail over? yes
network interfaces of headnode and servers: each server has shared 10 Gbit/s connection
bandwidth from xrootd head node and outside: each server has shared 10 Gbit/s connection
are there limitations in the number of allowed accesses to the system (number of queries)? thread number set to 1024 per server
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: no
are you registered in the AAA monitoring system? yes
other key parameters we are missing?:
 

Hadoop/BeSTMan

LStore/BeStMan

Lustre/BeSTMan

Revision 5 (2014-09-24) - TommasoBoccali

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 69 to 69

dpm

storm

Added:
>
>

T2_IT_Pisa


cms_site_name: T2_IT_Pisa
storage technology and version: storm 1.11.4 (on sl6)
xrootd server version: xrootd.3.3.4-1 on sl6
how many xrootd server are installed? 2
usage of xrootd proxy? no
usage of xrootd manager (cmsd)? no
head node hardware description: no head node
is there load-balancing and fail over? yes
network interfaces of headnode and servers: each server has a 10 Gbit/s connection
bandwidth from xrootd head node and outside: each server has a 10 Gbit/s connection
are there limitations in the number of allowed accesses to the system (number of queries)? thread number set to 4096 per server
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: no
are you registered in the AAA monitoring system? yes
other key parameters we are missing?:
 

Hadoop/BeSTMan

LStore/BeStMan

Lustre/BeSTMan

Revision 4 (2014-09-24) - AndreasNowack

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Changed:
<
<
Please copy the following questions and fill the answers under the label corrisponding to your storage system:
>
>
Please copy the following questions and fill the answers under the label corresponding to your storage system:
 
cms_site_name:
Line: 27 to 27
 

castor

dcache

Added:
>
>

T2_DE_RWTH


cms_site_name: T2_DE_RWTH
storage technology and version: dcache 2.6.28-1 (on SL 6)
xrootd server version: xrootd-3.3.1-1.2.osg.el5 (on SL 5; OS upgrade to SL 6 planned for the near future)
how many xrootd server are installed? 1
usage of xrootd proxy? no
usage of xrootd manager (cmsd)? yes
head node hardware description: xrootd server: 8 cores, 16 GB RAM; dCache head node: 8 cores, 16 GB RAM
is there load-balancing and fail over? no
network interfaces of headnode and servers: port channels with 2*1 Gb/s
bandwidth from xrootd head node and outside: port channel with 2*1 Gb/s
are there limitations in the number of allowed accesses to the system (number of queries)? dCache xrootd door with maximum 1000 threads, 100 or 500 parallel active transfers per dCache pool depending on the pool
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? TFC
are you serving multiple VOs with your xrootd setup: no
are you registered in the AAA monitoring system? yes
other key parameters we are missing?: dates of tests are not indicated in the plots (if the dates of the attachments correspond to the dates of tests, the measurements were done during a phase of massive reorganisation of the whole dCache system); xrootd CMS TFC plugin (xrootd-cmstfc-1.5.1-6.osg.el5) on the xrootd server; authenticated access to the dCache xrootd door (xrootdPlugins=gplazma:gsi,authz:cms-tfc)
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? not applicable
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) yes: no parameter "cms.dfs" in configuration, java heap space 4 GB
 

T3_CH_PSI

cms_site_name: T3_CH_PSI

Revision 3 (2014-09-23) - FabioMartinelli

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Line: 24 to 24
 (for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?)


Added:
>
>
 

castor

dcache

Added:
>
>

T3_CH_PSI

cms_site_name: T3_CH_PSI
storage technology and version: dCache 2.6.33 (ns=Chimera)
xrootd server version: xrootd-3.3.6-1.slc6.x86_64
how many xrootd server are installed? one
usage of xrootd proxy? no, we use https://twiki.cern.ch/twiki/bin/view/Main/DcacheXrootd
usage of xrootd manager (cmsd)? yes
head node hardware description: a simple VMware VM, 8 GB RAM, 4 vcores
is there load-balancing and fail over? no, but the VM will be quickly and automatically restarted if the host where it is running fails
network interfaces of headnode and servers: the headnode VM uses one 100 Mbit/s interface; the 11 dCache pool servers have 4x1 Gbit/s in an LACP trunk, while 2 dCache pool servers have 10 Gbit/s
bandwidth from xrootd head node and outside: 100 Mbit/s
are there limitations in the number of allowed accesses to the system (number of queries)? we allow two parallel xrootd requests per single dCache pool server; the other xrootd requests get queued waiting for one of those two slots
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink? N/A
are you serving multiple VOs with your xrootd setup: no, just CMS
are you registered in the AAA monitoring system? yes
other key parameters we are missing?: if that matters, the dCache xrootd door authenticates and authorizes each xrootd request via xrootdPlugins=gplazma:gsi,authz:cms-tfc; no anonymous access
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints? N/A 
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?) yes
 

dpm

storm

Hadoop/BeSTMan

Revision 2 (2014-09-22) - FedericaFanzago

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Changed:
<
<
Please copy the following questions and fill the answers under your storage technology:
>
>
Please copy the following questions and fill the answers under the label corrisponding to your storage system:
 
cms_site_name:

Revision 1 (2014-09-22) - FedericaFanzago

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="CmsXrootdOpenFileTests"

Configuration and hardware of sites belonging to data federation

Please copy the following questions and fill the answers under your storage technology:


cms_site_name:
storage technology and version:
xrootd server version:
how many xrootd server are installed?
usage of xrootd proxy?
usage of xrootd manager (cmsd)?
head node hardware description
is there load-balancing and fail over?
network interfaces of headnode and servers:
bandwidth from xrootd head node and outside:
are there limitations in the number of allowed accesses to the system (number of queries)?
is the access to /store/test/xrootd/$SITENAME dir done using TFC or symlink?
are you serving multiple VOs with your xrootd setup:
are you registered in the AAA monitoring system?
other key parameters we are missing?:
(for dpm sites) have you already followed the configuration suggested in https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/TuningHints?
(for dcache sites) have you already followed the configuration suggested in https://twiki.cern.ch/twiki/bin/view/Main/CmsXrootdOpenFileTests#To_improve_performance_of_dcache? or other twiki (which?)
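
For the TFC-or-symlink question above, the two approaches look roughly like this; the site name and PFN paths are placeholders, not any real site's values. A TFC rule in storage.xml strips the /store/test/xrootd/$SITENAME prefix:

<lfn-to-pfn protocol="direct" path-match="/+store/test/xrootd/T2_XX_Example/(.*)" result="/pnfs/example.org/data/cms/$1"/>

The symlink alternative achieves the same by pointing the test directory back at the storage root:

ln -s /pnfs/example.org/data/cms /pnfs/example.org/data/cms/store/test/xrootd/T2_XX_Example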


castor

dcache

dpm

storm

Hadoop/BeSTMan

LStore/BeStMan

Lustre/BeSTMan

-- FedericaFanzago - 22 Sep 2014

 