Configuration
Hardware
- 16 worker nodes
- Dual Pentium IV Xeon 2.80GHz with 2GB RAM, onboard Intel e1000 interface with an additional Intel e1000 NIC in a PCI slot.
- 4 disk servers
- Dual Pentium IV Xeon 2.80GHz with 2GB RAM, dual onboard Intel e1000 interfaces, each connected to 2 Infortrend Eonstore A16U-G1A3 arrays via an Adaptec 39320 Ultra320 SCSI adaptor. Each array was configured with 16 x WD2500 disk drives, as a RAID 5 with a hot spare. Two logical drives of 1.5TB were created but only one used, in order to minimise SCSI traffic.
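The 1.5TB logical-drive size follows from the array arithmetic. A quick sketch, using the drive counts from the description above and assuming nominal decimal gigabytes and that the hot spare sits outside the RAID 5 set:

```python
# Usable capacity of one EonStore array (sketch, not vendor figures).
DRIVES = 16        # WD2500 drives per array
SPARE = 1          # one hot spare, excluded from the RAID set
PARITY = 1         # RAID 5 parity costs one drive's worth of space
DRIVE_GB = 250     # WD2500 nominal capacity in decimal GB

usable_gb = (DRIVES - SPARE - PARITY) * DRIVE_GB
print(usable_gb)   # 3500 GB, carved into two 1.5TB logical drives
```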
Software
The worker nodes are running Scientific Linux 3.0.4.
The disk servers are running a tuned Fedora Legacy 7.3 kernel.
Versions of dCache software installed:
- pnfs-3.1.10-12
- d-cache-core-1.5.2-33
- d-cache-opt-1.5.3-15
The 4 disk servers were set up with a 1.5TB dCache pool on each EonStore array. The ext2 filesystem was used with the noatime option set. The full tag limit was set for the aic79xx device driver, along with read streaming, and the arrays were configured for sequential access. The 16 worker nodes were set up as a dCache head node, a PostgreSQL database server, an SRM door node and 13 GridFTP door nodes.
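The driver and filesystem settings described above might look roughly like the following on a 2.4-era kernel. The aic79xx option syntax, the tag/read-streaming values, and the device and mount paths are assumptions for illustration, not taken from this setup:

```
# /etc/modules.conf -- raise the aic79xx tag depth and enable read
# streaming (option names assumed; check the driver docs for your kernel)
options aic79xx aic79xx=tag_info:{{255,255,255,255}}.rd_strm:{{0xFFF}}

# /etc/fstab -- mount each pool filesystem as ext2 with atime updates off
# (device and mount point hypothetical)
/dev/sda1  /pool1  ext2  defaults,noatime  0  2
```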
Networking
All nodes were connected to the UKLight network through a Summit 7i switch which was connected to a Nortel Passport router, which in turn was connected to the UKLight adaptor.
Problems
- Our initial network configuration had the disk servers not connected to the Summit 7i: data flowed from the GridFTP doors to the pools over our production network, a stack of Nortel BayStack 5510-48T switches. This had to be abandoned because of problems with dual-homing and because the two networks were not configured to route to each other.
- Network performance was poor.
--
JamesCasey - 17 Jun 2005