Adding an extra LCG CE to your site

If you wish to run multiple lcg-CEs, the following steps are necessary (tested on gLite 3.1). In the instructions below, replace $CE_HOST with the second CE's hostname:

On the new CE node:
  • Create a second site-info.def (e.g. site-info-2.def), identical to the one normally used for your site, except for the CE_HOST variable, which should be set to the new CE's hostname.
  • Configure the new CE as a Computing Element only, i.e. no Torque server and no site BDII (a site can have only one of each):
   yaim -c -s site-info-2.def -n lcg-CE -n TORQUE_utils
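The second site-info.def from the first step can be generated from the existing one; a minimal sketch (the hostname ce2.example.org is a placeholder — substitute your new CE's hostname):

```shell
# Copy the existing YAIM configuration and point CE_HOST at the new node.
cp site-info.def site-info-2.def
sed -i 's/^CE_HOST=.*/CE_HOST=ce2.example.org/' site-info-2.def
```

All other variables stay identical, so both CEs are configured consistently against the same batch system and site BDII.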

The CE's basic functionality should now be ready; test it with a simple job submitted directly to the CE:

globus-job-run $CE_HOST /bin/hostname

On the Torque Server node:
  • To configure your site's Torque server to accept job submissions from the new CE, append $CE_HOST to the /etc/hosts.equiv file:
   echo "$CE_HOST" >> /etc/hosts.equiv
  • You also need to append the new $CE_HOST to the ADMINHOSTS line in maui.cfg, so that the vomaxjobs-maui command works from the new Computing Element.
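The maui.cfg change can be scripted; a sketch, assuming the file already contains an ADMINHOSTS line and lives under /var/spool/maui (the path and the placeholder hostname are assumptions — adjust for your installation):

```shell
# Append the new CE's hostname to the existing ADMINHOSTS line,
# then restart maui so it re-reads its configuration.
CE_HOST=ce2.example.org   # substitute the new CE's hostname
sed -i "s/^ADMINHOSTS.*/& $CE_HOST/" /var/spool/maui/maui.cfg
service maui restart
```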

On the Worker Nodes:
  • Append $CE_HOST to the NODES line in /opt/edg/etc/edg-pbs-knownhosts.conf.
  • Run /opt/edg/sbin/edg-pbs-knownhosts to regenerate /etc/ssh/ssh_known_hosts. This allows passwordless ssh logins from the new CE to the WNs.
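The two steps above can be combined into a short script to run as root on each WN (the hostname ce2.example.org is a placeholder — substitute your new CE's hostname):

```shell
# Add the new CE to the NODES line of the knownhosts configuration,
# then regenerate /etc/ssh/ssh_known_hosts from it.
CE_HOST=ce2.example.org   # substitute the new CE's hostname
sed -i "s/^NODES.*/& $CE_HOST/" /opt/edg/etc/edg-pbs-knownhosts.conf
/opt/edg/sbin/edg-pbs-knownhosts
```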

The new CE should now be able to submit jobs to the queues. Submit a simple job to test this:

globus-job-run $CE_HOST:2119/jobmanager-lcgpbs /bin/hostname

On the Site BDII node:
  • To have the new CE published by the information system, add a line with its resource BDII LDAP URL to /opt/glite/etc/gip/site-urls.conf. For example:
   echo "CE2     ldap://$CE_HOST:2170/mds-vo-name=resource,o=grid" >> /opt/glite/etc/gip/site-urls.conf

After a while the new CE should be published correctly; check it by running the lcg-infosites or the glite-wms-job-list-match command.

NFS shared gridmapdir:
On both CEs, the directory /etc/grid-security/gridmapdir holds the mapping between Grid users and local pool accounts. Since the two CEs share the same WNs, keeping a separate gridmapdir on each CE can lead to security problems and other issues. The proposed solution is to share the gridmapdir between the CEs via NFS. The exact steps are:
  • On the first CE (NFS server):
   # echo "/etc/grid-security/gridmapdir $CE_HOST(rw,no_root_squash,sync)" >> /etc/exports
   # chkconfig nfs on
   # service nfs restart
  • On the other CEs (NFS client):
   # echo "$FIRST_CE_HOST:/etc/grid-security/gridmapdir /etc/grid-security/gridmapdir nfs defaults 0 0" >> /etc/fstab
   # service netfs restart

On the MON box:
To allow the apel-pbs-log-parser on the new CE to write BLAH records into MON's MySQL database, access must be granted. To grant access, run the following command on the MON box:
# mysql --password="$MYSQL_PASSWORD" --execute "grant all on accounting.* to 'accounting'@'${CE_HOST}' identified by '${APEL_DB_PASSWORD}'"

-- DimitriosApostolou - 04 Dec 2007

Topic revision: r5 - 2008-03-27 - DimitriosApostolou