Installation of a gLite WMS or LCG RB node
Requesting a new host certificate
The general procedure can be found here (GD specific) and here (FIO specific). In short, execute the following command line (the certificate is automatically uploaded to SINDES):
/usr/bin/host-certificate-manager --from=it-dep-gd-gmod.cern.ch --username=gdadmin HOSTNAME
(use the short hostname here, without .cern.ch!)
Register the node with the MyProxy server
Send an email to CCService.ManagerOnDuty@cern.ch to request that the host be added to the MyProxy server. You must give the DN of the machine.
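The DN can be read directly from the host certificate; a minimal sketch, assuming the certificate is installed at the usual grid location /etc/grid-security/hostcert.pem:

# Print the subject DN of the host certificate (adjust the path if
# your installation differs from the grid default).
openssl x509 -in /etc/grid-security/hostcert.pem -noout -subject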
Configuration of the 3 RAID-1 disks on a mid-range server
- Check that the 3 RAID-1 disks are configured:
[root@rb113 root]# tw_cli /c0 show
Unit  UnitType  Status  %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-1    OK      -      -       149.001   ON     OFF      OFF
u1    RAID-1    OK      -      -       232.82    ON     OFF      OFF
u2    RAID-1    OK      -      -       232.82    ON     OFF      OFF
u3    RAID-1    OK      -      -       232.82    ON     OFF      OFF

Port  Status  Unit  Size       Blocks     Serial
---------------------------------------------------------------
p0    OK      u0    153.38 GB  321672960  VDK91GTE0ABPKR
p1    OK      u0    153.38 GB  321672960  VDK91GTE0B7K9R
p2    OK      u1    232.88 GB  488397168  VDS41DT4F5LDMJ
p3    OK      u1    232.88 GB  488397168  VDS41DT4F4M22J
p4    OK      u2    232.88 GB  488397168  VDS41DT4F5K9GJ
p5    OK      u2    232.88 GB  488397168  VDS41DT4F5JNDJ
p6    OK      u3    232.88 GB  488397168  VDS41DT4F571KJ
p7    OK      u3    232.88 GB  488397168  VDS41DT4F5LDZJ

Name  OnlineState  BBUReady  Status  Volt  Temp  Hours  LastCapTest
---------------------------------------------------------------------------
bbu   On           Yes       OK      OK    OK    0      xx-xxx-xxxx
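A quick scripted sanity check of the same output can be convenient before going further; a minimal sketch (the awk pattern assumes the unit rows look exactly like the listing above):

# Exit non-zero if any unit on controller /c0 reports a status other than OK.
tw_cli /c0 show | awk '/^u[0-9]/ && $3 != "OK" { bad = 1; print "unit " $1 " is " $3 }
                       END { exit bad }' && echo "all RAID units OK"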
- Creation of the RAID-1 partitions
tw_cli /c0 add type=raid1 disk=2:3
tw_cli /c0 add type=raid1 disk=4:5
tw_cli /c0 add type=raid1 disk=6:7
- Deletion of the RAID-1 partitions
tw_cli /c0/u1 del
tw_cli /c0/u2 del
tw_cli /c0/u3 del
- Get the script fileserver-datadisk-setup.sh, copy it to /root on the target node, and edit it to change the bytes-per-inode value from 262144 to 8192 (the former value was too high, and 4096 is too small). Check that the value has really been changed before executing the script; a possible way to do the edit is sketched below.
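A minimal sketch of the edit and its verification, assuming the script has been copied to /root and that 262144 appears only in the bytes-per-inode option:

# Replace the bytes-per-inode value in the copied script, then confirm it.
sed -i 's/262144/8192/' /root/fileserver-datadisk-setup.sh
grep -n 8192 /root/fileserver-datadisk-setup.sh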
[root@rb113 root]# ./fileserver-datadisk-setup.sh --prefix /data --fs ext3
Searching for accessible SCSI devices:
/dev/sdb
/dev/sdc
/dev/sdd
Scanning for RAID superblocks on SCSI devices:
/dev/sdb: none
/dev/sdc: none
/dev/sdd: none
Setting up /dev/sdb:
fdisk /dev/sdb > /tmp/fdisk.sdb.log 2>&1
dd if=/dev/zero of=/dev/sdb1 bs=1M count=8 2>/dev/null
mkfs.ext2 -m 0 -i 8192 /dev/sdb1 >/tmp/mkfs.sdb1.log 2>&1
# Changing filesystem label: '' -> 'data01'
tune2fs -L data01 /dev/sdb1 > /tmp/tune2fs.sdb1.log
tune2fs -j /dev/sdb1 >> /tmp/tune2fs.sdb1.log
# mount point exists
print 'LABEL=data01 /data01 ext3 defaults 1 3' >> /etc/fstab
mount -t ext3 /dev/sdb1 /data01
Setting up /dev/sdc:
fdisk /dev/sdc > /tmp/fdisk.sdc.log 2>&1
dd if=/dev/zero of=/dev/sdc1 bs=1M count=8 2>/dev/null
mkfs.ext2 -m 0 -i 8192 /dev/sdc1 >/tmp/mkfs.sdc1.log 2>&1
# Changing filesystem label: '' -> 'data02'
tune2fs -L data02 /dev/sdc1 > /tmp/tune2fs.sdc1.log
tune2fs -j /dev/sdc1 >> /tmp/tune2fs.sdc1.log
# mount point exists
print 'LABEL=data02 /data02 ext3 defaults 1 3' >> /etc/fstab
mount -t ext3 /dev/sdc1 /data02
Setting up /dev/sdd:
fdisk /dev/sdd > /tmp/fdisk.sdd.log 2>&1
dd if=/dev/zero of=/dev/sdd1 bs=1M count=8 2>/dev/null
mkfs.ext2 -m 0 -i 8192 /dev/sdd1 >/tmp/mkfs.sdd1.log 2>&1
# Changing filesystem label: '' -> 'data03'
tune2fs -L data03 /dev/sdd1 > /tmp/tune2fs.sdd1.log
tune2fs -j /dev/sdd1 >> /tmp/tune2fs.sdd1.log
# mount point exists
print 'LABEL=data03 /data03 ext3 defaults 1 3' >> /etc/fstab
mount -t ext3 /dev/sdd1 /data03
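After the script has finished, the result can be checked; a minimal sketch, assuming the labels and mount points created in the transcript above:

# The three data filesystems should be mounted under their labels.
df -h /data01 /data02 /data03
# With a 4096-byte block size, 8192 bytes per inode means the block count
# should be roughly twice the inode count.
tune2fs -l /dev/sdb1 | egrep -i 'inode count|block count'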
LCG RB-specific things to do
- Deactivate the APT repositories (especially the cern.list repository, to avoid an upgrade of the OS).
- Modify /etc/sysconfig/apt-autoupdate to disable the auto-update and to send all mail to the RB service manager mailing list.
- Install the glite-yaim package (if not already installed) and edit the site-info.def file.
- Create /home/edguser directory and change its ownership to edguser:edguser.
- Execute the install_node and configure_node scripts (yaim).
- Run the RB-post-install script, which migrates the middleware log files, sandbox and MySQL database onto the 3 RAID-1 partitions.
- Modify the /etc/cron.daily/slocate.cron cron job to exclude the directories /data01, /data02 and /data03 (see the sketch after this list).
- Install the RB/WMS monitoring tool (see instructions here).
- Put the machine in GOCDB.
- Start the lcg-fmon-job-status service (which is not started by default).
- Copy from one of our LCG RBs the cron job /etc/cron.hourly/lcg-mon-job-status.cron.
- Publish the information in the Information System (cf. the site BDII nodes bdii103 and bdii104 at CERN).
- Register the node in the GD Firewall database.
- Register the node on the MyProxy nodes (see instructions here).
- Check that you can submit a job to this node.
- Configure the MySQL database to allow connections from the Real Time Monitor tool (see mail from Gidon Moont).
- Put the machine in production (check that no alarms are present via lemon-host-check).
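For the slocate exclusion above, a minimal sketch of the modified /etc/cron.daily/slocate.cron, assuming the stock Red Hat layout of that cron job (check the actual content on the node and only extend its -e list):

#!/bin/sh
# Nightly slocate database rebuild; /data01, /data02 and /data03 are
# appended to the stock exclusion list so the job does not crawl the
# middleware data partitions.
renice +19 -p $$ >/dev/null 2>&1
/usr/bin/updatedb -f "nfs,smbfs,ncpfs,proc,devpts" \
    -e "/tmp,/var/tmp,/usr/tmp,/afs,/net,/data01,/data02,/data03"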
gLite WMS-specific things to do
- Run the WMS-post-install script, which migrates the middleware log files, sandbox and MySQL database onto the 3 RAID-1 partitions.
- Modify the /etc/cron.daily/slocate.cron cron job to exclude the directories /data01, /data02 and /data03 (same modification as for the LCG RB above).
- Install the RB/WMS monitoring tool (see instructions here).
- Register the node in the GD Firewall database.
- Check that you can submit a job to this node (see the sketch after this list).
- Put the machine in production (check that no alarms are present via lemon-host-check).
- Register the node on the MyProxy nodes (see instructions here).
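A possible submission test, run from a UI with a valid proxy; a minimal sketch in which the JDL content and the WMProxy endpoint (host and port 7443) are assumptions to adapt to the actual setup. For an LCG RB the analogous test uses edg-job-submit.

# Create a trivial test job and submit it through the node's WMProxy
# endpoint (replace <wms-host> with the real hostname).
cat > test.jdl <<'EOF'
Executable    = "/bin/hostname";
StdOutput     = "std.out";
StdError      = "std.err";
OutputSandbox = {"std.out", "std.err"};
EOF
glite-wms-job-submit -a -e https://<wms-host>.cern.ch:7443/glite_wms_wmproxy_server test.jdl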
Settings for the gLite WMS 3.1 node
Take a look at the following wiki page: https://twiki.cern.ch/twiki/bin/view/LCG/GLiteWMSLog (see, for example, the rb125 section).