<div style="float:right; text-align:right;">
%TABLE{sort="off" tableframe="all" tableborder="2" databg="#FFF8C6" dataalign="left" cellspacing="0"}%
|*Quick Links*|
|[[%ATTACHURL%/EMI_GenericInstallationConfigurationEMI3_v_3_0_0_1.pdf][v. 3.0.0-1]]|
</div>

---+!! Generic Installation & Configuration for EMI 3

%TOC%

This document is addressed to Site Administrators responsible for middleware installation and configuration. It is a generic guide to manual installation and configuration for EMI products. The list of supported products and services can be found in the EMI 3 web pages:
   * [[http://www.eu-emi.eu/montebianco-products][EMI 3 (Monte Bianco) products list]]

When installing a particular product, please also have a look at its specific release page for product-specific installation & configuration information.

---+ Installing the Operating System

All EMI 3 components are fully supported on the *SL5/x86_64* and *SL6/x86_64* platforms, with EPEL as the repository for external components. Full platform support means the component is distributed from the EMI repository using certified source and binary packages according to the format specification of the platform.
A subset of services, mainly the clients and libraries that are part of the User Interface, is also available for Debian 6 64bit.

---++ Scientific Linux 5 & 6

For more information on Scientific Linux please check: [[http://www.scientificlinux.org][http://www.scientificlinux.org]]

All the information to install this operating system can be found at [[https://www.scientificlinux.org/download][https://www.scientificlinux.org/download]]

Example of *sl5.repo* file:
<verbatim>
[core]
name=SL 5 base
baseurl=http://linuxsoft.cern.ch/scientific/5x/$basearch/SL
        http://ftp.scientificlinux.org/linux/scientific/5x/$basearch/SL
        http://ftp1.scientificlinux.org/linux/scientific/5x/$basearch/SL
        http://ftp2.scientificlinux.org/linux/scientific/5x/$basearch/SL
protect=0
</verbatim>

Example of *sl6.repo* file:
<verbatim>
[core]
name=SL 6 base
baseurl=http://linuxsoft.cern.ch/scientific/6x/$basearch/SL
        http://ftp.scientificlinux.org/linux/scientific/6x/$basearch/SL
protect=0
</verbatim>

---++ Debian 6

For more information on Debian please check [[http://www.debian.org/][http://www.debian.org/]]. All the information to install this operating system can be found at [[http://www.debian.org/releases/stable/installmanual][http://www.debian.org/releases/stable/installmanual]]

Example of *deb.list* file:
<verbatim>
deb http://ftp.it.debian.org/debian/ squeeze main contrib non-free
deb-src http://ftp.it.debian.org/debian/ squeeze main contrib non-free
deb http://security.debian.org/ squeeze/updates main contrib
deb-src http://security.debian.org/ squeeze/updates main contrib
</verbatim>

---++ Node synchronization, NTP installation and configuration

A general requirement is that the nodes are synchronized. This requirement may be fulfilled in several ways. If your nodes run under AFS they are most likely already synchronized. Otherwise, you can use the NTP protocol with a time server. Instructions and examples for an NTP client configuration are provided in this section.
If you are not planning to use a time server on your machine you can just skip this section.

Use the latest ntp version available for your system. If you are using APT, an =apt-get install ntp= will do the work.

   * Configure the file =/etc/ntp.conf= by adding the lines dealing with your time server configuration, for instance:
<verbatim>
restrict <time_server_IP_address> mask 255.255.255.255 nomodify notrap noquery
server <time_server_name>
</verbatim>
Additional time servers can be added for better performance. For each server, the hostname and IP address are required. Then, for each time server you are using, add a couple of lines similar to the ones shown above to the file =/etc/ntp.conf=.
   * Edit the file =/etc/ntp/step-tickers= adding a list of your time server(s) hostname(s), as in the following example:
<verbatim>
137.138.16.69
137.138.17.69
</verbatim>
   * If you are running a kernel firewall, you will have to allow inbound communication on the NTP port. If you are using iptables, you can add the following to =/etc/sysconfig/iptables=:
<verbatim>
-A INPUT -s NTP-serverIP-1 -p udp --dport 123 -j ACCEPT
-A INPUT -s NTP-serverIP-2 -p udp --dport 123 -j ACCEPT
</verbatim>
Remember that, in the provided examples, rules are parsed in order, so ensure that there are no matching REJECT lines preceding those that you add. You can then reload the firewall:
<verbatim>
# /etc/init.d/iptables restart
</verbatim>
   * Activate the ntpd service with the following commands:
<verbatim>
# ntpdate <your ntp server name>
# service ntpd start
# chkconfig ntpd on
</verbatim>
   * You can check ntpd's status by running the following command:
<verbatim>
# ntpq -p
</verbatim>

---++ Cron and logrotate

Many middleware components rely on the presence of cron (including support for the =/etc/cron.*= directories) and logrotate. You should make sure these utilities are available on your system.
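As a quick sanity check, a short shell helper can report whether the cron daemon and logrotate are present. This is a minimal sketch of our own (the =check_util= function and the =/usr/sbin= probe are not part of the middleware); the cron package is typically vixie-cron on SL5 and cronie on SL6.

```shell
# check_util NAME: report whether NAME is available on this node.
# Daemons such as crond usually live in /usr/sbin, which may not be
# on a non-root user's PATH, so probe that directory explicitly.
check_util() {
    if command -v "$1" >/dev/null 2>&1 || [ -x "/usr/sbin/$1" ]; then
        echo "$1: found"
    else
        echo "$1: MISSING"
    fi
}

check_util crond      # the cron daemon
check_util logrotate
```

If either line reports MISSING, install the corresponding package with your package manager before proceeding.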
---+ Host Certificates

All nodes except UI, WN and BDII require the host certificate/key files to be installed. Contact your Certification Authority (CA) to understand how to obtain a host certificate if you do not have one already.

Once you have obtained a valid certificate:
   * _hostcert.pem_ - containing the machine public key
   * _hostkey.pem_ - containing the machine private key

make sure to place the two files on the target node in the _/etc/grid-security_ directory, and check the access rights: _hostkey.pem_ must be readable only by root, while the public key, _hostcert.pem_, must be readable by everybody.

---+ Installing the Middleware

For SL5 & SL6 the *YUM* package manager is considered to be the default installation tool; for Debian, *apt*.

---++ Repositories

For a successful installation, you will need to configure your package manager to reference a number of repositories (in addition to your OS).

---+++ The Certification Authority repository

All the details on how to install the CAs can be found in the EGI IGTF release pages ([[https://wiki.egi.eu/wiki/EGI_IGTF_Release][https://wiki.egi.eu/wiki/EGI_IGTF_Release]]). They contain information about how to configure the YUM & APT managers for downloading and installing the trust anchors ("Certification Authorities" or "CAs") that all sites should install.

*NOTE*: BDII site and top services do not need, for the moment, the installation of the CAs.

---+++ The EPEL repository

If not present by default on your nodes, you should enable the EPEL repository ([[https://fedoraproject.org/wiki/EPEL][https://fedoraproject.org/wiki/EPEL]]). EPEL has an 'epel-release' package that includes gpg keys for package signing and repository information.
Installing the latest version of the epel-release package available on the EPEL5 and EPEL6 repositories, i.e.:
   * [[http://download.fedoraproject.org/pub/epel/5/x86_64/][http://download.fedoraproject.org/pub/epel/5/x86_64/]], or
   * [[http://www.nic.funet.fi/pub/mirrors/fedora.redhat.com/pub/epel/6/x86_64/][http://www.nic.funet.fi/pub/mirrors/fedora.redhat.com/pub/epel/6/x86_64/]]
should allow you to use normal tools, such as yum, to install packages and their dependencies. By default the stable EPEL repo is enabled.

Example of *epel.repo* file:
<verbatim>
[extras]
name=epel
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=epel-5&arch=$basearch
protect=0
</verbatim>
or
<verbatim>
[extras]
name=epel
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=epel-6&arch=$basearch
protect=0
</verbatim>

---+++ The middleware (EMI) repositories

All EMI products are distributed from a *single repository* ([[http://emisoft.web.cern.ch/emisoft/][http://emisoft.web.cern.ch/emisoft]]) with the following structure:
   * EMI-production (stable), *EMI/{1,2,3}*:
      * stable and signed, well tested software components, recommended to be installed on production sites;
   * *deployment/{1,2,3}*:
      * signed packages that will become part of the next stable distribution; they have passed the certification and validation phase and are available for technical previews;
   * *testing/{1,2,3}*:
      * unsigned packages that will become part of the next stable distribution; they are in the certification stage and available for technical previews.

The packages are signed with the EMI gpg key, which can be downloaded from [[http://emisoft.web.cern.ch/emisoft/dist/EMI/3/RPM-GPG-KEY-emi][http://emisoft.web.cern.ch/emisoft/dist/EMI/3/RPM-GPG-KEY-emi]]. Please import the key *BEFORE* starting!
The fingerprint of the key is:
<verbatim>
pub   1024D/DF9E12EF 2011-05-04
      Key fingerprint = AC82 01B1 DD50 6F4D 649E  DFFC 27B3 331E DF9E 12EF
uid   Doina Cristina Aiftimiei (EMI Release Manager) <aiftim@pd.infn.it>
sub   2048g/C1E57858 2011-05-04
</verbatim>
   * for SL5/SL6 save the key under _/etc/pki/rpm-gpg/_:
<verbatim>
# rpm --import http://emisoft.web.cern.ch/emisoft/dist/EMI/3/RPM-GPG-KEY-emi
</verbatim>
   * for Debian:
<verbatim>
# wget -q -O - http://emisoft.web.cern.ch/emisoft/dist/EMI/3/RPM-GPG-KEY-emi | sudo apt-key add -
</verbatim>

---++++ Giving EMI repositories precedence over EPEL

It is *strongly recommended* that EMI repositories take precedence over EPEL when installing and upgrading packages.

For manual configuration:
   * you must install the *yum-priorities* plugin and ensure that its configuration file, =/etc/yum/pluginconf.d/priorities.conf=, is as follows:
<verbatim>
[main]
enabled = 1
check_obsoletes = 1
</verbatim>

For automatic configuration:
   * we strongly recommend the use of the *emi-release* package.
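With the priorities plugin enabled, precedence is expressed through a =priority= value in each =.repo= file (a lower number wins over a higher one). Purely as an illustration, an EMI repository entry taking precedence over EPEL might look like the sketch below; the section name and exact layout here are assumptions, since the *emi-release* package ships the real repo files:

```ini
# /etc/yum.repos.d/emi3-base.repo -- illustrative sketch only
[EMI-3-base]
name=EMI 3 Base Repository (SL5, illustrative)
baseurl=http://emisoft.web.cern.ch/emisoft/dist/EMI/3/sl5/x86_64/base
gpgcheck=1
gpgkey=http://emisoft.web.cern.ch/emisoft/dist/EMI/3/RPM-GPG-KEY-emi
enabled=1
priority=1
```

With EPEL left at the yum-priorities default priority (99), packages available in both repositories are then taken from EMI.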
Please follow the instructions given below on which version of the package to use, how to get it, and how to install it according to your deployment scenario (upgrade or fresh installation).

---+++ Configuring the use of EMI 3 repositories

   * EMI 3 production repositories are available at:
      * [[http://emisoft.web.cern.ch/emisoft/dist/EMI/3/][http://emisoft.web.cern.ch/emisoft/dist/EMI/3/]]
   * YUM & APT configuration files are available at:
      * SL5 - [[http://emisoft.web.cern.ch/emisoft/dist/EMI/3/repos/sl5/][http://emisoft.web.cern.ch/emisoft/dist/EMI/3/repos/sl5/]]
      * SL6 - [[http://emisoft.web.cern.ch/emisoft/dist/EMI/3/repos/sl6/][http://emisoft.web.cern.ch/emisoft/dist/EMI/3/repos/sl6/]]
      * Debian6 - [[http://emisoft.web.cern.ch/emisoft/dist/EMI/3/repos/debian/][http://emisoft.web.cern.ch/emisoft/dist/EMI/3/repos/debian/]]
   * update EMI repositories on a node with *EMI 1 middleware to EMI 3* (SL5/x86_64):
      * first remove the emi-release package installed on your node:<BR/> <verbatim># rpm -e emi-release</verbatim>
      * install the EMI 3 emi-release package:<BR/> <verbatim># wget http://emisoft.web.cern.ch/emisoft/dist/EMI/3/sl5/x86_64/base/emi-release-3.0.0-2.el5.noarch.rpm
# yum localinstall emi-release-3.0.0-2.el5.noarch.rpm (*)</verbatim>
   * update EMI repositories on a node with *EMI 2 middleware to EMI 3* (SL5/x86_64):<BR/> <verbatim># rpm -Uvh http://emisoft.web.cern.ch/emisoft/dist/EMI/3/sl5/x86_64/base/emi-release-3.0.0-2.el5.noarch.rpm</verbatim> or <verbatim># wget http://emisoft.web.cern.ch/emisoft/dist/EMI/3/sl5/x86_64/base/emi-release-3.0.0-2.el5.noarch.rpm
# yum localupdate emi-release-3.0.0-2.el5.noarch.rpm (*)</verbatim>
   * install EMI 3 repositories on a fresh node, without EMI middleware:
      * SL5/x86_64:<BR/> <verbatim># wget http://emisoft.web.cern.ch/emisoft/dist/EMI/3/sl5/x86_64/base/emi-release-3.0.0-2.el5.noarch.rpm
# yum localinstall emi-release-3.0.0-2.el5.noarch.rpm (*)</verbatim>
      * SL6/x86_64:<BR/> <verbatim># wget http://emisoft.web.cern.ch/emisoft/dist/EMI/3/sl6/x86_64/base/emi-release-3.0.0-2.el6.noarch.rpm
# yum localinstall emi-release-3.0.0-2.el6.noarch.rpm (*)</verbatim>
      * Debian:<BR/> <verbatim># wget http://emisoft.web.cern.ch/emisoft/dist/EMI/3/debian/dists/squeeze/main/binary-amd64/emi-release_3.0.0-2.deb6.1_all.deb
# dpkg -i emi-release_3.0.0-2.deb6.1_all.deb</verbatim>

(*) - please add the option "--nogpgcheck" if you did not download the key first.

These packages install the required dependencies and the EMI public key, and ensure the precedence of the EMI repositories over EPEL and Debian.

---++ Important note on automatic updates

Several sites use automatic update mechanisms. Sometimes middleware updates require non-trivial configuration changes or a reconfiguration of the service. This could involve service restarts, new configuration files, etc., which makes it difficult to ensure that automatic updates will not break a service. Thus *WE STRONGLY RECOMMEND NOT TO USE AUTOMATIC UPDATE PROCEDURES OF ANY KIND* on the EMI middleware repositories (you can keep them turned on for the OS). You should read the update information provided by each service and do the upgrade manually when an update has been released!

---++ Installations

You need to have enabled only the repositories listed above (Operating System, EPEL, Certification Authority, EMI).

Example of a general installation of a product / service:
   * SL5/SL6:
<verbatim>
# yum update
# yum install ca-policy-egi-core
# yum install <meta-package/package name>
</verbatim>
   * Debian6:
<verbatim>
# apt-get update
# apt-get install ca-policy-egi-core
# apt-get install <meta-package/package name>
</verbatim>

*NOTE*: on operating systems other than SL5/x86_64, such as CentOS, it has happened that for certain node types you have to install the jdk (SunJdk) package first. Please refer to your Operating System documentation to learn how to do this.
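The node-type names used in the install commands are easy to mistype. Purely as a convenience sketch of our own (not an EMI tool), a small shell helper can map a few common node types from the table below to their meta-package names:

```shell
# emi_metapackage NODE_TYPE: print the EMI 3 meta-package for a few
# common node types (subset of the table below; illustrative only).
emi_metapackage() {
    case "$1" in
        UI)        echo emi-ui ;;
        WN)        echo emi-wn ;;
        CREAM)     echo emi-cream-ce ;;
        ARGUS)     echo emi-argus ;;
        BDII_site) echo emi-bdii-site ;;
        BDII_top)  echo emi-bdii-top ;;
        WMS)       echo emi-wms ;;
        *)         echo "unknown node type: $1" >&2; return 1 ;;
    esac
}

# usage: yum install "$(emi_metapackage UI)"
```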
The table below lists the available EMI meta-packages and packages:

| *Node Type /<br> Product Name * | *meta-package name* || *Comments* |
|^| *SL5/SL6* | *Debian* |^|
| AMGA_postgresql | emi-amga-postgresql | - | |
| APEL publisher | emi-apel | - | |
| ARC-CE | nordugrid-arc-compute-element | - | |
| ARC core | nordugrid-arc <br> nordugrid-arc-doc <br> nordugrid-arc-ca-utils <br> nordugrid-arc-debuginfo <br> nordugrid-arc-devel <br> nordugrid-arc-doxygen <br> nordugrid-arc-hed <br> nordugrid-arc-java <br> nordugrid-arc-python <br> nordugrid-arc-python26 <br> nordugrid-arc-plugins-needed <br> nordugrid-arc-plugins-globus | - | |
| ARC Clients | nordugrid-arc-client-tools | - | |
| ARC gridftp | nordugrid-arc-gridftpd | - | |
| ARC !InfoSys | nordugrid-arc-information-index | - | |
| ARGUS | emi-argus | emi-argus | |
| BDII_site | emi-bdii-site | - | |
| BDII_top | emi-bdii-top | - | |
| CANL | canl-c <br> canl-c-debuginfo <br> canl-c-devel <br> canl-c-examples <br> canl-java <br> canl-java-javadoc | canl-c-dbg <br> libcanl-c-dev <br> libcanl-c-examples <br> libcanl-c2 <br> libcanl-java <br> libcanl-java-doc | Common !AuthenticatioN Library - set of libraries |
| CLUSTER | emi-cluster | - | |
| CREAM | emi-cream-ce | - | |
| CREAM LSF module | emi-lsf-utils | - | |
| CREAM TORQUE module | emi-torque-utils | - | |
| dCache | dcache-server | - | |
| DPM mysql | emi-dpm_mysql | - | |
| DPM disk | emi-dpm_disk | - | |
| EMIR | server: emi-emir <br> client: emird | - | |
| FTS oracle | emi-fts_oracle, emi-fta_oracle | - | |
| GLEXEC_wn | glexec-wn | - | yaim is no longer installed with the metapackage: install yaim-glexec-wn separately |
| LB | emi-lb | - | |
| LFC mysql | emi-lfc_mysql | - | |
| LFC oracle | emi-lfc_oracle | - | |
| MPI_utils | emi-mpi | - | |
| Nagios | emi-nagios | - | |
| Pseudonimity | pseudonymity-server <br> pseudonymity-ui | - | |
| PX (!MyProxy) | emi-px | - | |
| STORM_backend | emi-storm-backend-mp | - | |
| STORM_frontend | emi-storm-frontend-mp | - | |
| STORM_checksum | emi-storm-checksum-mp | - | |
| STORM_gridhttps | emi-storm-gridhttps-mp | - | |
| STORM_globus_gridftp | emi-storm-globus-gridftp-mp | - | |
| STORM_srm_client | emi-storm-srm-client-mp | - | |
| TORQUE WN config | emi-torque-client | - | |
| TORQUE server config | emi-torque-server | - | |
| User Interface | emi-ui | - | |
| UNICORE/X | unicore-unicorex6 | - | |
| UNICORE-UCC6 | unicore-ucc6 | - | |
| UNICORE Gateway6 | unicore-gateway6 | - | |
| UNICORE-HILA | unicore-hila-emi-es <br> unicore-hila-gridftp <br> unicore-hila-shell <br> unicore-hila-unicore6 | - | |
| UNICORE Registry6 | unicore-registry6 | - | |
| UNICORE TSI6 | unicore-tsi6 | - | |
| UNICORE XUUDB | unicore-xuudb | - | |
| UNICORE UVOS | unicore-uvos-clc <br> unicore-uvos-server <br> unicore-uvos-webapp <br> unicore-uvos-webauth | - | |
| VOMS_mysql | emi-voms-mysql | - | |
| VOMS_oracle | emi-voms-oracle | - | |
| WMS | emi-wms | - | |
| WNODES | wnodes_bait <br> wnodes_hypervisor <br> wnodes_manager <br> wnodes_nameserver <br> wnodes_site_specific <br> wnodes_utils | - | |
| Worker Node | emi-wn | - | |

---+ Configuring the Middleware

---++ Using the YAIM configuration tool

Some of the EMI services can be configured using the YAIM tool. For a detailed description on how to configure the middleware with YAIM, please check the individual products/services guides and the *YAIM Guide*:
   * old guide - [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400]]
   * new guide - WorkInProgress [[https://twiki.cern.ch/twiki/bin/view/EMI/EMIYaim][https://twiki.cern.ch/twiki/bin/view/EMI/EMIYaim]]

The YAIM modules needed to configure a certain service/product are automatically installed with the middleware. However, if you want to install YAIM packages separately, you can install them by running _yum install glite-yaim-<node-type>_.
This will automatically install the YAIM module you are interested in, together with yaim-core, which contains the core functions and utilities used by all the YAIM modules.

---++ Configuration information

The table below lists the configuration instructions for some of the EMI services:

#ConfigTarget
| *Node Type/Service* | *Comments* |
| AMGA_postgresql | yaim configuration target "AMGA_postgresql" <br> [[https://twiki.cern.ch/twiki/pub/EMI/AMGA/amga-manual_2_3_0.pdf][https://twiki.cern.ch/twiki/pub/EMI/AMGA/amga-manual_2_3_0.pdf]] |
| APEL publisher | yaim configuration target "APEL" <br> use [[https://twiki.cern.ch/twiki/pub/EMI/APELClient/Publisher_System_Administrator_Guide_v1.0.0.pdf][https://twiki.cern.ch/twiki/pub/EMI/APELClient/Publisher_System_Administrator_Guide_v1.0.0.pdf]] |
| ARC-CE | [[http://www.nordugrid.org/documents/arc-server-install.html][http://www.nordugrid.org/documents/arc-server-install.html]] <br> [[http://www.nordugrid.org/documents/arex_tech_doc.pdf][http://www.nordugrid.org/documents/arex_tech_doc.pdf]] |
| ARC Clients | [[http://www.nordugrid.org/documents/arc-client-install.html#configure][arc* tools]] <br> [[http://www.nordugrid.org/documents/arc-ui.pdf][ARC Client Configuration]] <br> [[http://www.nordugrid.org/documents/ui.pdf][Section "Configuration"]] |
| ARC !InfoSys | [[http://www.nordugrid.org/documents/arc_infosys.pdf][http://www.nordugrid.org/documents/arc_infosys.pdf]] |
| ARGUS | yaim config target "ARGUS_server" <br> [[https://twiki.cern.ch/twiki/bin/view/EGEE/ArgusEMIDeployment][https://twiki.cern.ch/twiki/bin/view/EGEE/ArgusEMIDeployment]] |
| BDII_site | yaim config target "BDII_site" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] |
| BDII_top | yaim config target "BDII_top" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] |
| CLUSTER | [[http://wiki.italiangrid.org/twiki/bin/view/CREAM/SystemAdministratorGuideForEMI1#1_4_4_Configuration_of_a_glite_C][CLUSTER config]] |
| CREAM | yaim config target "creamCE" <br> [[http://wiki.italiangrid.org/twiki/bin/view/CREAM/SystemAdministratorGuideForEMI1#1_4_Configuration][CREAM Configuration]] |
| CREAM LSF module | yaim config target "LSF_utils" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] |
| DPM mysql | yaim config target "emi_dpm_mysql" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] <br> [[https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/Configuration#ConfiguringaHeadNode][specific HEAD_node configuration]] |
| DPM disk | yaim config target "emi_dpm_disk" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] <br> [[https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm/Admin/Configuration#ConfiguringaDiskNode][specific DISK_node configuration]] |
| FTS oracle | yaim config targets "emi_fts2", "emi_fta2", "emi_ftm2" <br> [[https://svnweb.cern.ch/trac/glitefts/wiki/FTSYaimReference_2_2_4][Full YAIM reference for FTS 2.2.6]] |
| GLEXEC_wn | yaim config target "GLEXEC_wn" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] <br> The GLEXEC_wn should always be installed together with a WN. |
| LB | yaim config target "LB" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] <br> [[http://egee.cesnet.cz/cvsweb/LB/LBAG.pdf][more info]] |
| LFC mysql | yaim config target "emi_lfc_mysql" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] <br> [[https://svnweb.cern.ch/trac/lcgdm/wiki/Lfc/Admin/Configuration][specific configuration]] |
| LFC oracle | yaim config target "emi_lfc_oracle" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] <br> [[https://svnweb.cern.ch/trac/lcgdm/wiki/Lfc/Admin/Configuration][specific configuration]] |
| MPI_utils | for CE configuration see [[http://grid.ifca.es/wiki/Middleware/MpiStart/MpiUtils#CE_Configuration][http://grid.ifca.es/wiki/Middleware/MpiStart/MpiUtils#CE_Configuration]] <br> for WN configuration see [[http://grid.ifca.es/wiki/Middleware/MpiStart/MpiUtils#WN_Configuration][http://grid.ifca.es/wiki/Middleware/MpiStart/MpiUtils#WN_Configuration]] |
| PX (MyProxy) | yaim config target "PX" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] |
| STORM_backend | yaim config target "SE_storm_backend" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] |
| STORM_frontend | yaim config target "SE_storm_frontend" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] |
| STORM_checksum | yaim config target "SE_storm_checksum" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] |
| STORM_gridhttps | yaim config target "SE_storm_gridhttps" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] |
| STORM_globus_gridftp | yaim config target "SE_storm_globus_gridftp" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] |
| STORM_srm_client | |
| TORQUE WN config | yaim config target "TORQUE_client" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] |
| TORQUE server config | yaim config target "TORQUE_server" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] |
| CREAM TORQUE module | yaim config target "TORQUE_utils" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] |
| UI | yaim config target "UI" <br> [[https://twiki.cern.ch/twiki/bin/view/EMI/GenericInstallationConfigurationEMI1#The_UI][see details below]] |
| UNICORE/X | |
| UNICORE-UCC | |
| UNICORE Gateway | |
| UNICORE-HILA | |
| UNICORE Registry | |
| UNICORE TSI | |
| UNICORE XUUDB | |
| UNICORE UVOS | |
| VOMS_mysql | yaim config target "VOMS_mysql" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] <br> [[https://twiki.cern.ch/twiki/bin/view/EMI/VOMSystemAdministratorGuide][more information]] |
| VOMS_oracle | yaim config target "VOMS_oracle" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] <br> [[https://twiki.cern.ch/twiki/bin/view/EMI/VOMSystemAdministratorGuide][more information]] |
| WMS | yaim config target "WMS" <br> [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][use yaim]] <br> [[https://wiki.italiangrid.it/twiki/bin/view/WMS/WMSSystemAdministratorGuide#1_3_2_Configuration_of_a_WMS_nod][more details on WMS config file]] |
| WN | yaim config target "WN" <br> see details below for configuring WNs for different batch systems |

---+++ The LSF batch system

You have to make sure that the necessary packages for submitting jobs to your LSF batch system are installed on your CE. By default, the packages come as tar balls. At CERN they are converted into rpms so that they can be automatically rolled out and installed in a clean way (in this case using Quattor).

Since LSF is commercial software it is not distributed together with the gLite middleware. Visit [[http://www.platform.com/Products/Platform.LSF.Family/][Platform's LSF home page]] for further information. You will also need to buy an appropriate number of license keys before you can use the product.
The documentation for LSF is available on the [[http://www.platform.com/Support/Product.Manuals.htm][Platform Manuals]] web page. You have to register in order to be able to access it.

---++++ The CREAM for LSF

   * follow the [[http://wiki.italiangrid.org/twiki/bin/view/CREAM/SystemAdministratorGuideForEMI2#1_4_Configuration][CREAM Configuration Guide]]

---++++ The WN for LSF

Apart from the LSF specific configuration settings there is nothing special to do on the worker nodes: just install the WN and use the plain WN configuration target:
<verbatim>
# yum install emi-wn
# /opt/glite/yaim/bin/yaim -c -s site-info.def -n WN
</verbatim>

---++++ Note on site-BDII for LSF

When you configure your site-BDII you have to populate the [vomap] section of the =/etc/lcg-info-dynamic-scheduler.conf= file yourself. This is because LSF's internal group mapping is hard to figure out from yaim, and to be on the safe side the site admin has to crosscheck. Yaim configures the lcg-info-dynamic-scheduler to use the LSF info provider plugin, which comes with meaningful default values. If you would like to change them, edit the =/etc/glite-info-dynamic-lsf.conf= file. After YAIM configuration you have to list the LSF group - VOMS FQAN mappings in the [vomap] section of the =/etc/lcg-info-dynamic-scheduler.conf= file.
As an example, here is an extract from CERN's config file:
<verbatim>
vomap :
   grid_ATLAS:atlas
   grid_ATLASSGM:/atlas/Role=lcgadmin
   grid_ATLASPRD:/atlas/Role=production
   grid_ALICE:alice
   grid_ALICESGM:/alice/Role=lcgadmin
   grid_ALICEPRD:/alice/Role=production
   grid_CMS:cms
   grid_CMSSGM:/cms/Role=lcgadmin
   grid_CMSPRD:/cms/Role=production
   grid_LHCB:lhcb
   grid_LHCBSGM:/lhcb/Role=lcgadmin
   grid_LHCBPRD:/lhcb/Role=production
   grid_GEAR:gear
   grid_GEARSGM:/gear/Role=lcgadmin
   grid_GEANT4:geant4
   grid_GEANT4SGM:/geant4/Role=lcgadmin
   grid_UNOSAT:unosat
   grid_UNOSAT:/unosat/Role=lcgadmin
   grid_SIXT:sixt
   grid_SIXTSGM:/sixt/Role=lcgadmin
   grid_EELA:eela
   grid_EELASGM:/eela/Role=lcgadmin
   grid_DTEAM:dteam
   grid_DTEAMSGM:/dteam/Role=lcgadmin
   grid_DTEAMPRD:/dteam/Role=production
   grid_OPS:ops
   grid_OPSSGM:/ops/Role=lcgadmin
module_search_path : ../lrms:../ett
</verbatim>

---+++ The Torque/PBS batch system

---++++ TORQUE Server

   * if you want to have a dedicated node for the TORQUE server:
<verbatim>
# yum install emi-torque-server emi-torque-utils
# /opt/glite/yaim/bin/yaim -c -s site-info.def -n TORQUE_server -n TORQUE_utils
</verbatim>
   * if you want to install and configure the TORQUE server on the same node as the CREAM Computing Element:
<verbatim>
# yum install emi-cream-ce emi-torque-server emi-torque-utils
# /opt/glite/yaim/bin/yaim -c -s site-info.def -n creamCE -n TORQUE_server -n TORQUE_utils
</verbatim>
For more details see the "CREAM System Administrator Guide": [[http://wiki.italiangrid.org/twiki/bin/view/CREAM/SystemAdministratorGuideForEMI3][http://wiki.italiangrid.org/twiki/bin/view/CREAM/SystemAdministratorGuideForEMI3]]

---++++ The WN for Torque/PBS

<verbatim>
# yum install emi-wn emi-torque-client
# /opt/glite/yaim/bin/yaim -c -s site-info.def -n WN -n TORQUE_client
</verbatim>

---++ The UI

<verbatim>
# yum install emi-ui
# /opt/glite/yaim/bin/yaim -c -s site-info.def -n UI
</verbatim>
[[%ATTACHURL%/EMI_GenericInstallationConfigurationEMI3_v_3_0_0_1.pdf][EMI_GenericInstallationConfigurationEMI3_v_3_0_0_1.pdf]]: EMI_GenericInstallationConfigurationEMI3_v_3_0_0_1.pdf
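All of the yaim invocations in this guide read their settings from a =site-info.def= file. The fragment below is a minimal illustrative sketch only: every value is a placeholder, the variable set required differs per node type, and the authoritative list is in the YAIM Guide referenced in the configuration section above.

```shell
# site-info.def -- illustrative fragment; all values are placeholders
SITE_NAME=MY-SITE
SITE_EMAIL="grid-admin@example.org"

# Key hosts at the site
CE_HOST=ce01.example.org
BDII_HOST=bdii.example.org
PX_HOST=myproxy.example.org

# Pool accounts and group mappings (paths are conventional, adjust as needed)
USERS_CONF=/opt/glite/yaim/etc/users.conf
GROUPS_CONF=/opt/glite/yaim/etc/groups.conf
WN_LIST=/opt/glite/yaim/etc/wn-list.conf

# Supported VOs and batch queues
VOS="dteam ops"
QUEUES="grid"
```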
Topic revision: r11 - 2013-10-03 - DoinaCristinaAiftimiei