YAIM configuration variables
site-info.def configuration variables
The following tables contain a list of variables used to configure most of the yaim modules. General variables can be found in:
- /opt/glite/yaim/examples/siteinfo/site-info.def : general variables that need a value specific to the site and that must be configured by the site admin.
- /opt/glite/yaim/defaults/site-info.pre : general variables that have a meaningful default value and do not need to be changed unless the site admin is interested in a more advanced configuration.
- /opt/glite/yaim/defaults/site-info.post : the same as site-info.pre , but sourced after the two previous files. This allows defining variables whose default values depend on other variables, such as INSTALL_ROOT .
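The effect of that sourcing order can be illustrated with a minimal, self-contained sketch (this is not YAIM's actual code; the temporary files and values below are made up for the illustration): values in site-info.def override the pre-defaults, and site-info.post can derive defaults from variables set earlier, such as INSTALL_ROOT.

```shell
# Sketch of the order in which the three files are sourced.
# Files are created in a temp dir purely for illustration; real
# deployments use the standard paths listed above.
tmp=$(mktemp -d)

cat > "$tmp/site-info.pre" <<'EOF'
INSTALL_ROOT=/opt    # pre-default, may be overridden in site-info.def
EOF

cat > "$tmp/site-info.def" <<'EOF'
SITE_NAME=yaim-testbed    # site-specific value set by the admin
EOF

cat > "$tmp/site-info.post" <<'EOF'
FUNCTIONS_DIR=${INSTALL_ROOT}/glite/yaim/functions   # depends on INSTALL_ROOT
EOF

# pre-defaults first, then the site file, then post-defaults
. "$tmp/site-info.pre"
. "$tmp/site-info.def"
. "$tmp/site-info.post"

echo "$FUNCTIONS_DIR"   # prints /opt/glite/yaim/functions
```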
In order to know whether a variable is compulsory in the configuration of a node type or not, please check the relevant node type section in this page, where you can find a description of which set of variables is actually needed for each node type.
NOTE: The distribution of variables across site-info.def , site-info.pre and site-info.post described here applies to yaim core >= 4.0.5-1. In lower versions the distribution may be different (most of the variables lived in site-info.def ), but the meaning of the variables is the same, so you can still search this document for variables from any version of yaim core.
site-info.def
Variable name |
Description |
Example |
YAIM version >= |
APEL_DB_PASSWORD |
Database password for APEL |
APEL_DB_PASSWORD=mypassword |
3.0.1-0 |
ARGUS_PEPD_ENDPOINTS |
Required when the ARGUS PEP client is to be configured. It is the PEP Server endpoint URL which the ARGUS client should use. |
ARGUS_PEPD_ENDPOINTS="https://pepd.example.org:8154/authz" |
4.0.12-1 |
BATCH_LOG_DIR |
Batch system log directory. For Torque/PBS, this must be set to the directory containing the server_logs directory; usually /var/torque |
BATCH_LOG_DIR=/var/spool/pbs |
4.0.0-12 |
BATCH_SERVER |
Batch server hostname |
BATCH_SERVER=yaim-pbs.cern.ch |
3.1.1-1 |
BATCH_SPOOL_DIR |
Your batch system's log directory. Replaced by BATCH_LOG_DIR in glite-yaim-core >= 4.0.0-12. |
deprecated |
deleted glite-yaim-core >= 4.0.0-12 |
BATCH_VERSION |
The version of the Local Resource Management System |
BATCH_VERSION=torque-2.3.0 |
3.0.1-0 |
BDII_DELETE_DELAY |
The cache period for LDAP records that disappeared from the BDII's input; by default it should be zero, but due to a bug affecting some versions of EMI-1 node types, the admin may need to define it explicitly |
BDII_DELETE_DELAY=0 |
|
BDII_HOST |
BDII hostname |
BDII_HOST=yaim-bdii.cern.ch |
3.0.1-0 |
BDII_LIST |
Optional variable to define a list of top level BDIIs to support the automatic failover in the GFAL clients and information system tools. The syntax is my-bdii1.$MY_DOMAIN:port1[,my-bdii2.$MY_DOMAIN:port2[...]] . A list of BDIIs is supported by GFAL, lcg_util, lcg-info, lcg-infosites, lcg-ManageVOTag, lcg-tags and glite-sd-query. |
BDII_LIST="yaim-bdii.cern.ch:2170,other-bdii.cern.ch:2170" |
4.0.5-1 |
CE_BATCH_SYS |
Batch system used by the CE. Possible values are 'torque', 'lsf', 'pbs', 'condor' and 'sge' |
CE_BATCH_SYS=torque |
3.0.1-0 |
CE_CAPABILITY |
This YAIM variable is a blank separated list and is used to set the GlueCECapability attribute where: 1) CPUScalingReferenceSI00=<referenceCPU-SI00>: this is the reference CPU SI00 that has to be calculated in two possible ways: a) If the batch system scales the published CPU time limit (GlueCEPolicyMaxCPUTime) to a reference CPU power then CPUScalingReferenceSI00 should be the SI00 rating for that reference; b) If the batch system does not scale the time limit then CPUScalingReferenceSI00 should be the SI00 rating of the least powerful core in the cluster. Sites which have moved to the HEP-SPEC benchmark should use it but converted to SI00 units using the scaling factor of 250, i.e. SI00 = 250*HEP-SPEC. 2) Share=<vo-name>:<vo-share>: this value is used to express VO fairshares targets. If there is no special share, this value MUST NOT be published. <vo-share> can assume values between 1 and 100 (it represents a percentage). Please note that the sum of the shares over all WLCG VOs MUST BE less than or equal to 100. If the worker nodes behind the CE provide the glexec facility (for WLCG VOs), an extra capability glexec must be added to the list. For lcg-CE >= 3.1.42 this variable may be set in /opt/glite/yaim/examples/siteinfo/services/lcg-ce instead of in site-info.def |
CE_CAPABILITY="CPUScalingReferenceSI00=100 Share=dteam:20 Share=atlas:10 glexec" |
4.0.7-1 |
CE_CPU_MODEL |
Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostProcessorModel attribute. System administrators MUST set this variable to the name of the processor model as defined by the vendor for the Worker Nodes in a SubCluster. Given the fact that SubClusters can be heterogeneous, this refers to the typical processor model for the nodes of a SubCluster. |
CE_CPU_MODEL=Xeon |
3.0.1-0 |
CE_CPU_SPEED |
Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostProcessorClockSpeed attribute. System administrators MUST set this variable to the processor clock speed expressed in MHz for the Worker Nodes in a SubCluster. Given the fact that SubClusters can be heterogeneous, this refers to the typical processor for the nodes of a SubCluster. If you need to publish this value correctly, you are requested to split your CE/SubClusters to be homogeneous. |
CE_CPU_SPEED=2334 |
3.0.1-0 |
CE_CPU_VENDOR |
Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostProcessorVendor attribute. System administrators MUST set this variable to the name of the processor vendor for the Worker Nodes in a SubCluster. Given the fact that SubClusters can be heterogeneous, this refers to the typical processor for the nodes of a SubCluster. |
CE_CPU_VENDOR=intel |
3.0.1-0 |
CE_HOST |
Computing Element Hostname |
CE_HOST=yaim-ce.cern.ch |
3.0.1-0 |
CE_DATADIR |
This YAIM variable is used to set the GlueCEInfoDataDir attribute. This is an optional variable that can be left undefined. Otherwise, system administrators should set it to the path of a shared directory available for application data. Typically a POSIX accessible transient disk space shared between the Worker Nodes. It may be used by MPI applications, to store intermediate files that need further processing by local jobs, or as a staging area, especially if the Worker Nodes have no internet connectivity. |
CE_DATADIR=/mypath |
3.0.1-0 |
CE_INBOUNDIP |
Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostNetworkAdapterInboundIP attribute. System administrators MUST set this variable to either FALSE or TRUE (in uppercase !) to express the permission for inbound connectivity for the WNs in the SubCluster, even if limited. |
CE_INBOUNDIP=FALSE |
3.0.1-0 |
CE_LOGCPU |
Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueSubClusterLogicalCPUs . System administrators MUST set this variable to the value of the “Total number of cores/hyperthreaded CPUs in the SubCluster, including the nodes part of the SubCluster that are temporarily down or offline”. In order to overcome the current YAIM limitation when a new CE head node giving access to the same batch resources is added to a site, site admins MUST set the CE_LOGCPU YAIM variable to 0 if the resources used by the new subclusters are already published via another CE. |
CE_LOGCPU=1472 |
4.0.3-9 |
CE_MINPHYSMEM |
Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostMainMemoryRAMSize attribute. System administrators MUST set this variable to the Total physical memory of a WN in the SubCluster expressed in MegaBytes. Given the fact that SubClusters can be heterogeneous, this refers to the typical worker node in a SubCluster. It is advisable to publish here the minimum total physical memory of the WNs in the SubCluster expressed in MegaBytes. If you need to publish this value correctly, you are requested to split your CE/SubClusters to be homogeneous. |
CE_MINPHYSMEM=16000 |
3.0.1-0 |
CE_MINVIRTMEM |
Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostMainMemoryVirtualSize attribute. System administrators MUST set this variable to the Total virtual memory of a WN in the SubCluster expressed in MegaBytes. Given the fact that SubClusters can be heterogeneous, this refers to the typical worker node in a SubCluster. It is advisable to publish here the minimum total virtual memory of the WNs in the SubCluster expressed in MegaBytes. If you need to publish this value correctly, you are requested to split your CE/SubClusters to be homogeneous. |
CE_MINVIRTMEM=32000 |
3.0.1-0 |
CE_OS |
Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostOperatingSystemName attribute . System administrators MUST set this variable to the name of the operating system used on the Worker Nodes part of the SubCluster. - see https://wiki.egi.eu/wiki/Operations/HOWTO05 |
CE_OS="ScientificSL" |
3.0.1-0 |
CE_OS_RELEASE |
Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostOperatingSystemRelease attribute. System administrators MUST set this variable to the release of the operating system used on the Worker Nodes part of the SubCluster - see https://wiki.egi.eu/wiki/Operations/HOWTO05 |
CE_OS_RELEASE=6.3 |
3.0.1-0 |
CE_OS_VERSION |
Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostOperatingSystemVersion attribute. System administrators MUST set this variable to the version of the operating system used on the Worker Nodes part of the SubCluster - see https://wiki.egi.eu/wiki/Operations/HOWTO05 |
CE_OS_VERSION=Carbon |
3.0.1-0 |
CE_OS_ARCH |
Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostArchitecturePlatformType attribute. System administrators MUST set this variable to the Platform Type of the WN in the SubCluster. Given the fact that SubClusters can be heterogeneous, this refers to the typical worker node in a SubCluster. More information can be found here: https://wiki.egi.eu/wiki/Operations/HOWTO06 |
CE_OS_ARCH=i686 |
3.0.1-0 |
CE_OTHERDESCR |
This YAIM variable is used to set the GlueHostProcessorOtherDescription attribute. The value of this variable MUST be set to: Cores=<typical-number-of-cores-per-CPU>[,Benchmark=<your-value>-HEP-SPEC06] where <typical-number-of-cores-per-CPU> is equal to the number of cores per CPU of a typical Worker Node in a SubCluster. The second value of this attribute MUST be published only in the case the CPU power of the SubCluster is computed using the Benchmark HEP-SPEC06. The syntax is Cores=value[,Benchmark=value-HEP-SPEC06] . |
CE_OTHERDESCR="Cores=4,Benchmark=100-HEP-SPEC06" |
4.0.7-1 |
CE_OUTBOUNDIP |
Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostNetworkAdapterOutboundIP attribute. System administrators MUST set this variable to either FALSE or TRUE (in uppercase !) to express the permission for direct outbound connectivity for the WNs in the SubCluster, even if limited. |
CE_OUTBOUNDIP=FALSE |
3.0.1-0 |
CE_PHYSCPU |
Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueSubClusterPhysicalCPUs . System administrators MUST set this variable to the value of the “Total number of real CPUs/physical chips in the SubCluster, including the nodes part of the SubCluster that are temporarily down or offline”. In order to overcome the current YAIM limitation when a new CE head node giving access to the same batch resources is added to a site, site admins MUST set the CE_PHYSCPU YAIM variable to 0 if the resources used by the new subclusters are already published via another CE. |
CE_PHYSCPU=736 |
4.0.3-9 |
CE_RUNTIMEENV |
Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostApplicationSoftwareRunTimeEnvironment . It should define a space separated list of software tags supported by the site. The list can include VO-specific software tags. In order to ensure backwards compatibility it should include the entry 'LCG-2', the current middleware version and the list of previous middleware tags |
CE_RUNTIMEENV="LCG-2 LCG-2_1_0 LCG-2_1_1 LCG-2_2_0 LCG-2_3_0 LCG-2_3_1 LCG-2_4_0 LCG-2_5_0 LCG-2_6_0 LCG-2_7_0 GLITE-3_0_0 GLITE-3_1_0 R-GMA" |
3.0.1-0 |
CE_SF00 |
Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostBenchmarkSF00 attribute. It's the performance index of your fabric in SpecFloat 2000. For some examples of Spec values see http://www.specbench.org/osg/cpu2000/results/cint2000.html |
CE_SF00=0 |
3.0.1-0 |
CE_SI00 |
Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostBenchmarkSI00 attribute . It's the performance index of your fabric in SpecInt 2000. System administrators MUST set this variable as indicated in page 5 of https://twiki.cern.ch/twiki/pub/LCG/WLCGCommonComputingReadinessChallenges/WLCG_GlueSchemaUsage-1.8.pdf. For some examples of Spec values see http://www.specbench.org/osg/cpu2000/results/cint2000.html |
CE_SI00=381 |
3.0.1-0 |
CE_SMPSIZE |
Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostArchitectureSMPSize attribute. System administrators MUST set this variable to the number of Logical CPUs (cores) of the WN in the SubCluster. Given the fact that SubClusters can be heterogeneous, this refers to the typical worker node in a SubCluster. If you need to publish this value correctly, you are requested to split your CE/SubClusters to be homogeneous. |
CE_SMPSIZE=2 |
3.0.1-0 |
CLASSIC_STORAGE_DIR |
The root storage directory on CLASSIC_HOST . This variable is no longer used after the introduction of SE_MOUNT_INFO_LIST . See bug 33210 for the lcg CE and bug 46681 for the cream CE to check in which yaim module version SE_MOUNT_INFO_LIST started being used, so you know when this variable can be dropped. |
deprecated |
3.0.1-0 |
CREAM_PEPC_RESOURCEID |
If specified and configuration of ARGUS PEP client is enabled then yaim will configure ARGUS on the cream CE, otherwise ARGUS setup is skipped on that node. The variable specifies the ARGUS resource ID to be used. |
CREAM_PEPC_RESOURCEID=urn:mysitename.org:resource:ce |
4.0.12-1 |
DPM_HOST |
Host name of the DPM host |
DPM_HOST=yaim.dpm.cern.ch |
3.0.1-0 |
FTS_HOST |
FTS server hostname. This variable is deprecated. See the FTS section of this twiki to know which variables are needed to configure an FTS. |
deprecated |
3.0.1-0 |
FTS_SERVER_URL |
The URL of the File Transfer Service server |
FTS_SERVER_URL="https://yaim-fts.cern.ch:8443/path/glite-data-transfer-fts" |
3.0.1-0 |
GENERAL_PEPC_RESOURCEID |
If specified and configuration of ARGUS is enabled then yaim will configure ARGUS PEP clients on nodes (where supported). Otherwise ARGUS PEP client setup is skipped. The variable specifies the ARGUS resource ID to be used. The cream CE and WMS have their own node-specific version of this variable, and GLEXEC on the WN is controlled by other variables, so this generic variable is not used in the configuration of those node types. |
GENERAL_PEPC_RESOURCEID=urn:mysitename.org:resource:other |
4.0.12-1 |
GLITE_EXTERNAL_ROOT |
The directory where the TAR UI and TAR WN install the external dependencies. Please, check the TAR UI and TAR WN installation instructions for more details. Note that GLITE_EXTERNAL_ROOT=${INSTALL_ROOT}/external is the only configuration that has been tested. |
GLITE_EXTERNAL_ROOT=${INSTALL_ROOT}/external |
3.0.1-0 |
GLITE_USER_HOME |
This variable will be deprecated in the future. From yaim-core >= 4.0.5-3 it defaults to GLITE_HOME_DIR . Please see yaim core 4.0.5-7 Known Issues if you are using yaim core <= 4.0.7-7 or yaim wms <= 4.0.5-2 |
GLITE_USER_HOME=/home/glite |
3.0.1-0 |
GRIDICE_SERVER_HOST |
GridIce server hostname. Only used in 3.0 configurations. |
GRIDICE_SERVER_HOST=my-gridice.cern.ch |
3.0.1-0 |
GROUPS_CONF |
Path to the file containing information on the mapping between VOMS groups and roles to local groups. An example of this configuration file is given in /opt/glite/yaim/examples/groups.conf . More details can be found in the Group configuration section in the YAIM guide. |
GROUPS_CONF=/opt/glite/etc/groups.conf |
3.0.1-0 |
JOB_MANAGER |
The name of the job manager used by the gatekeeper. Must be one of: lcgpbs, lcglsf, lcgsge, lcgcondor, lsf, pbs or condor. For a CREAM CE and glite-Cluster instead specify one of: pbs, lsf, sge or condor (no "lcg" version) |
JOB_MANAGER=lcgpbs |
3.0.1-0 |
LB_HOST |
LB hostname. It is no longer mandatory for the UI and VOBOX, only for the WMS configuration. See more information in the variable list of each node type |
LB_HOST=yaim-lb.cern.ch |
3.0.1-0 |
LOCAL_GROUPS_CONF |
Optional variable to specify a local groups.conf. It is similar to GROUPS_CONF but used to specify a separate file where local accounts specific to the site are defined. More details can be found in the Group configuration section in the YAIM guide. |
LOCAL_GROUPS_CONF=/opt/glite/yaim/etc/local.conf |
4.0.5-1 |
MON_HOST |
RGMA hostname. |
MON_HOST=yaim-mon.cern.ch |
3.0.1-0 |
MYSQL_PASSWORD |
The mysql root password. Define it only if you are installing a mysql server. |
MYSQL_PASSWORD=password |
3.0.1-0 |
PX_HOST |
Myproxy hostname. |
PX_HOST=yaim-px.cern.ch |
3.0.1-0 |
QUEUES |
The name of the queues defined in the CE |
QUEUES="dteam atlas" |
3.0.1-0 |
<queue-name>_GROUP_ENABLE |
Space separated list of VO names and VOMS FQANs which are allowed to access the queue. |
DTEAM_GROUP_ENABLE="dteam /dteam/Higgs /dteam/ROLE=production" |
3.0.1-0 |
RB_HOST |
Resource Broker hostname. |
RB_HOST=yaim-rb.cern.ch |
3.0.1-0 |
RFIO_PORT_RANGE |
Optional variable for the rfio port range |
RFIO_PORT_RANGE="20000,25000" |
3.0.1-0 |
SE_GRIDFTP_LOGFILE |
Variable necessary to configure the gridview client on the SEs. It sets the location and filename of the gridftp server logfile on the different types of SEs. |
SE_GRIDFTP_LOGFILE=/var/log/dpm-gsiftp/dpm-gsiftp.log |
4.0.3-9 |
SE_LIST |
A space separated list of SE hostnames available at your site |
SE_LIST="dpm.cern.ch castor.cern.ch" |
3.0.1-0 |
SE_MOUNT_INFO_LIST |
This YAIM variable is used to set the GlueCESEBindMountInfo attribute for each defined SE. The variable is a space separated list of SE hosts from SE_LIST with the export directory from the Storage Element and the mount directory common to the worker nodes part of the Computing Element, like SE1:export_dir1,mount_dir1. If any SE from SE_LIST doesn't support the mount concept, don't define anything for that SE in this variable. If this is the case for all the SEs in SE_LIST , put the value none . In both cases the GlueCESEBindMountInfo will be "n.a". Please note that, in the way the glue schema is specified, an SE can only have one mount point. See also Bug 54530 affecting this variable. |
SE_MOUNT_INFO_LIST="se1.cern.ch:/data/atlas,/storage/atlas se2.cern.ch:/data/dteam,/storage/dteam" |
4.0.7-1 |
SITE_BDII_HOST |
The site BDII host name |
SITE_BDII_HOST=yaim-site.cern.ch |
4.0.0 |
SITE_EMAIL |
This YAIM variable is used to set the GlueSiteEmailContact attribute. It's the main email contact for the site. The syntax is a comma separated list of email addresses. |
SITE_EMAIL=yaim-contact@cern.ch,admin-yaim@cern.ch |
3.0.1-0 |
SITE_HTTP_PROXY |
Optional variable to specify whether your site has an http proxy (the syntax is that of the http_proxy environment variable). It will be used in config_crl and used by the cron jobs (http_proxy) in order to reduce the load on the CA host. |
SITE_HTTP_PROXY="http-proxy.my.domain" |
3.0.1-0 |
SITE_INFO_VERSION |
Optional variable to specify the version of the set of configuration files (site-info.def, vo.d/, group.d/, nodes/ and local functions) that the sys admin can package under one rpm. It's in fact the rpm version. This variable is used when executing the option -p of the yaim command. Note that this variable has to be defined in site-info.def and not in any other configuration file in the siteinfo directory. |
SITE_INFO_VERSION=1.1 |
4.0.3-5 |
SITE_LAT |
This YAIM variable is used to set the GlueSiteLatitude attribute. It's the position of the site north or south of the equator measured from -90º to 90º with positive values going north and negative values going south. |
SITE_LAT=46.20 |
3.0.1-0 |
SITE_LONG |
This YAIM variable is used to set the GlueSiteLongitude attribute. It's the position of the site east or west of Greenwich, England measured from -180º to 180º with positive values going east and negative values going west. |
SITE_LONG=6.1 |
3.0.1-0 |
SITE_NAME |
This YAIM variable is used to set the GlueSiteName attribute. It's the human-readable name of your site. |
SITE_NAME=yaim-testbed |
3.0.1-0 |
SPECIAL_POOL_ACCOUNTS |
Optional variable. It determines the use of pool accounts for special users when generating the grid-mapfile. If not defined, YAIM will decide whether to use special pool accounts or not automatically. The value is yes or no |
SPECIAL_POOL_ACCOUNTS=yes |
4.0.5-1 |
USE_ARGUS |
Optional variable. When set to yes indicates that setup of the ARGUS authorisation framework is to be done. A number of other variables are required to fully specify the ARGUS parameters and allow the configuration to be made. See the "ARGUS authorisation framework control" section in the definition file, where the variables are grouped together. Currently the enabling of ARGUS on the WN is independent of this option. |
USE_ARGUS=no |
4.0.12-1 |
USER_HOME_PREFIX |
Optional variable used to specify a home directory for the pool accounts different from /home. The directory must exist in the system; YAIM does not create it. If it doesn't exist, the yaim command will fail when trying to add the users, so sys admins must ensure the directory specified by this variable already exists. See below in the VO related variables the usage of this variable per VO. If the variable is defined for a certain VO, that value will have priority over this one. |
USER_HOME_PREFIX=/special/dir/ |
4.0.4-1 |
USERS_CONF |
Path to the file containing the list of Linux users (pool accounts) to be created. This file should be created by the site administrator. It contains a plain list of the users and their IDs. An example of this configuration file is given in /opt/glite/yaim/examples/users.conf . More details can be found in the User configuration section in the YAIM guide. |
USERS_CONF=/opt/glite/yaim/etc/users.conf |
3.0.1-0 |
VOS |
List of supported VOs |
VOS="dteam atlas" |
3.0.1-0 |
VO_SW_DIR |
Base directory for installation of the experiment software. It's normally used in combination of a VO related variable. |
VO_SW_DIR=/opt/exp_soft |
3.0.1-0 |
WMS_PEPC_RESOURCEID |
If specified and configuration of ARGUS PEP client is enabled then yaim will configure ARGUS on the WMS, otherwise ARGUS setup is skipped on that node. The variable specifies the ARGUS resource ID to be used. |
WMS_PEPC_RESOURCEID=urn:mysitename.org:resource:wms |
4.0.12-1 |
WN_LIST |
Path to the list of Worker Nodes. The list of Worker Nodes is a file to be created by the site administrator. An example of this configuration file is given in /opt/glite/yaim/examples/wn-list.conf. For more information please check the WN list section in the YAIM guide. |
WN_LIST=/opt/glite/yaim/etc/wn.conf |
3.0.1-0 |
VO related variables
Note: Exceptionally, LCG VOs have VO related variables distributed in site-info.def . For EGEE VOs, it's recommended that sys admins check the CIC portal VO ID card information to know which values they should use to configure their VO related variables. Note that <vo-name> should be in capital letters, and if '.' and '-' are part of the VO name, they should be transformed into '_'. For example, a VO called org.yaim.vo.name should define its variables as VO_ORG_YAIM_VO_NAME_* . For more information on VO variables please check the vo.d directory section in the YAIM guide.
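That renaming rule can be sketched with a small shell helper (illustrative only; vo_to_var_prefix is not part of YAIM): uppercase the VO name, turn '.' and '-' into '_', and prepend VO_.

```shell
# Sketch of the VO-name-to-variable-prefix rule described above.
# The helper name is made up; YAIM does not ship this function.
vo_to_var_prefix() {
    # tr '.-' '__' maps both '.' and '-' to '_' (trailing '-' is literal)
    echo "VO_$(echo "$1" | tr '.-' '__' | tr '[:lower:]' '[:upper:]')"
}

vo_to_var_prefix org.yaim.vo.name    # prints VO_ORG_YAIM_VO_NAME
```

A VO called org.yaim.vo.name would then define, for example, VO_ORG_YAIM_VO_NAME_SW_DIR or VO_ORG_YAIM_VO_NAME_VOMSES.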
Variable name |
Description |
YAIM version >= |
VO_<vo-name>_DEFAULT_SE |
Default SE used by the VO. |
3.0.1-0 |
VO_<vo-name>_LB_HOSTS |
Optional variable to specify a space separated list of LB hostname:port pairs supported by the VO. |
glite-yaim-clients 4.0.3-3 |
VO_<vo-name>_MAP_WILDCARDS |
Optional variable to automatically add wildcards per FQAN in the LCMAPS gridmap file and groupmap file. Set it to 'yes' if you want to add the wildcards in your VO. Leave it undefined or set it to 'no' if you don't want to configure wildcards in your VO. |
4.0.5-5 |
VO_<vo-name>_PX |
Myproxy server supported by the VO. |
glite-yaim-clients 3.1.0-0 |
VO_<vo-name>_RBS |
A space separated list of RB hostnames supported by the VO. |
3.0.1-0 |
VO_<vo-name>_STORAGE_DIR |
Path to the storage area for the VO on an SE classic. Note that the classic SE is no longer part of gLite 3.1. |
deprecated in 3.1 |
VO_<vo-name>_SW_DIR |
Area on the WN for the installation of the experiment software. If a predefined shared area where VO managers can pre-install software has been mounted on the WNs, then this variable should point to that area. If instead there is no shared area and each job must install the software, then this variable should contain a dot ( . ). In any case, the mounting of shared areas, as well as the local installation of VO software, is not managed by yaim and should be handled locally by Site Administrators. |
3.0.1-0 |
VO_<vo-name>_UNPRIVILEGED_MKGRIDMAP |
Optional variable to create a grid-map file which only contains mappings to ordinary users for the VO. Set to no to create a grid-map file with special users as well, if defined in groups.conf; set to yes to create a grid-mapfile containing only mappings to ordinary pool accounts. |
4.0.10-1 |
VO_<vo-name>_USER_HOME_PREFIX |
Optional variable used to specify a home directory for the pool accounts different from /home. The directory must exist in the system; YAIM does not create it. If it doesn't exist, the yaim command will fail when trying to add the users, so sys admins must ensure the directory specified by this variable already exists. |
4.0.11-1 |
VO_<vo-name>_VOMSES |
This variable contains the vomses file parameters needed to contact a VOMS server. Multiple VOMS servers can be given if the parameters are enclosed in single quotes. The syntax should be 'vo_nickname voms_server_hostname port voms_server_host_cert_dn vo_name gt_version' where gt_version is optional and refers to the version of the Globus Toolkit the VOMS server is running. This argument is needed to know how to contact the VOMS server, which is done in a different way depending on the GT version it's running. YAIM supports a nickname as first argument (rather than requiring it to be the same as vo_name) since version 4.0.12-1. |
3.0.1-0 |
VO_<vo-name>_VOMS_EXTRA_MAPS |
Optional variable used to define any further arbitrary maps you need in edg-mkgridmap.conf. |
Deprecated glite-yaim-core >= 4.0.4-2 |
VO_<vo-name>_VOMS_CA_DN |
DN of the CA that signs the VOMS server certificate. Multiple values can be given if enclosed in single quotes. Note that there must be as many entries as in the VO_<vo-name>_VOMSES variable. There's a one to one relationship in the elements of both lists, so the order must be respected. |
4.0.3-6 |
VO_<vo-name>_VOMS_SERVERS |
A list of the VOMS servers used to create the DN grid-map file. The format is vomss://<host-name>:8443/voms/<vo-name>?/<vo-name> . |
3.0.1-0 |
VO_<vo-name>_WMS_HOSTS |
Optional variable to specify a space separated list of WMS hostnames supported by the VO. |
glite-yaim-clients 4.0.3-3 |
site-info.pre
Variable name |
Description |
Default value |
YAIM version >= |
BATCH_BIN_DIR |
The path of the lrms commands |
/usr/bin |
3.0.1-0 |
BDII_ARCHIVE_SIZE |
It is the number of dumps of the database to keep for debugging purposes. This variable is actually taken into account when using BDII version 5. |
0 |
4.0.8-1 |
BDII_BREATHE_TIME |
It is the time in seconds between updates of the bdii. |
120 |
4.0.8-1 |
BDII_GROUP |
BDII user group |
edguser |
4.0.5-1 |
BDII_HOME_DIR |
BDII user home directory. Note: It is recommended to use /var/lib/user_name as the HOME directory for system users. |
/home/edguser |
4.0.10-1 |
BDII_PASSWD |
This is the password for the LDAP database used by the bdii. This variable is actually taken into account when using BDII version 5. |
$(mkpasswd -s 0) |
4.0.8-1 |
BDII_READ_TIMEOUT |
It is the amount of time to wait before a query is assumed to have timed out. This variable is actually taken into account when using BDII version 5. |
300 |
4.0.8-1 |
BDII_RESOURCE_TIMEOUT |
The timeout value to be used with a resource BDII. It is the time the BDII will wait when running the GIP |
$BDII_SITE_TIMEOUT - 5 |
3.0.1-0 |
BDII_SITE_TIMEOUT |
The timeout value to be used with a site BDII. It is the time the BDII will wait when querying resource BDIIs or GRISs |
120 |
3.0.1-0 |
BDII_USER |
BDII user |
edguser |
4.0.5-1 |
CA_REPOSITORY |
APT repository containing the string to install the Certification Authorities files. Used by the TAR UI |
rpm http://linuxsoft.cern.ch/ LCG-CAs/current production |
3.0.1-0 |
CONFIG_GRIDMAPDIR |
It enables or disables the creation of the gridmap directory. |
yes (See more details in case you are configuring GLEXEC_wn) |
4.0.11-1 |
CONFIG_USERS |
The creation of groups and users needed by the middleware is done by YAIM. The default value is yes . If you want to disable this functionality set it to no . You must then ensure the users and groups defined in $INSTALL_ROOT/glite/yaim/examples/edgusers.conf are created in your system. For the VO pool accounts, YAIM also provides an example file in $INSTALL_ROOT/glite/yaim/examples/users.conf . Even if you create your own users, you must provide a similar file that will be used to create the gridmap file. |
yes |
4.0.5-1 |
CRON_DIR |
Directory where YAIM writes all the cron jobs |
/etc/cron.d |
3.0.1-0 |
DPMMGR_USER |
DPM user |
dpmmgr |
4.0.5-1 |
DPMMGR_GROUP |
DPM user group |
dpmmgr |
4.0.5-1 |
EDG_WL_SCRATCH |
Optional scratch directory for jobs |
"" |
3.0.1-0 |
EDG_USER |
edg user |
edguser |
4.0.5-1 |
EDG_GROUP |
edg user group |
edguser |
4.0.5-1 |
EDG_HOME_DIR |
edg user home directory. Note: It is recommended to use /var/lib/user_name as the HOME directory for system users. |
/home/edguser |
4.0.10-1 |
EDGINFO_USER |
edginfo user |
edginfo |
4.0.5-1 |
EDGINFO_GROUP |
edginfo user group |
edginfo |
4.0.5-1 |
EDGINFO_HOME_DIR |
edginfo user home directory. Note: It is recommended to use /var/lib/user_name as the HOME directory for system users. |
/home/edginfo |
4.0.10-1 |
FQANVOVIEWS |
If set to yes yaim will configure the infosystem to publish the CE VOViews also for groups mapped/identified by a VOMS FQAN. If set to no then only the VO VOViews will be published. If you want to know more on how the publication mechanism of VOViews works, read this . |
no |
4.0.4-1 |
FUNCTIONS_DIR |
The directory where YAIM functions are stored |
/opt/glite/yaim/functions |
3.0.1-0 |
GIP_CACHE_TTL |
How long information in the cache is valid. |
300 |
3.0.1-0 |
GIP_FRESHNESS |
If the information from the plug-ins is within this timelimit, the dynamics plug-ins will not be executed. |
60 |
3.0.1-0 |
GIP_RESPONSE |
How long the GIP will wait for dynamic plug-ins to run before reading the information from the cache. |
$BDII_SITE_TIMEOUT - 5 |
3.0.1-0 |
GIP_TIMEOUT |
The timeout value to be used with dynamic plug-ins. |
150 |
3.0.1-0 |
GLITE_USER |
glite user |
glite |
4.0.5-1 |
GLITE_GROUP |
glite user group |
glite |
4.0.5-1 |
GLITE_HOME_DIR |
glite user home directory |
/home/glite |
4.0.5-1 |
GLOBUS_TCP_PORT_RANGE |
Port range for Globus IO. It should be specified as "num1,num2". YAIM automatically handles the syntax of this variable depending on the version of VDT. If it's VDT 1.6 it leaves "num1,num2". If it's a version < VDT 1.6 it changes to "num1 num2" |
"20000,25000" |
3.0.1-0 |
GRIDFTP_CONNECTIONS_MAX |
Maximum number of simultaneous connections to the gridftp server. The default is increased to 150 in yaim core >= 4.0.10-1. For yaim core <= 4.0.6-1 it is recommended to raise this variable to 2-3 times its default of 50 |
150 |
4.0.6-1 |
INSTALL_ROOT |
Installation root - change if using the re-locatable distribution. |
/opt |
3.0.1-0 |
INFOSYS_GROUP |
Information system user group |
infosys |
4.0.5-1 |
JAVA_LOCATION |
Path to the Java VM installation. It can be used to run a different version of Java installed locally. WARNING: this variable will disappear soon |
/usr/java/j2sdk1.4.2_12 |
Deprecated in glite-yaim-core >= 4.0.8-1 |
LCMAPS_DEBUG_LEVEL |
LCMAPS debugging level |
0 |
4.0.1-4 |
LCMAPS_LOG_LEVEL |
LCMAPS logging level |
1 |
4.0.1-4 |
LCAS_DEBUG_LEVEL |
LCAS debugging level |
0 |
4.0.1-4 |
LCAS_LOG_LEVEL |
LCAS logging level |
1 |
4.0.1-4 |
LCG_REPOSITORY |
APT repository for the EGEE middleware. This applies only to gLite 3.0, which uses apt. For gLite 3.1, please check the gLite 3.1 repository documentation. |
'rpm http://glitesoft.cern.ch/EGEE/gLite/APT/R3.0/ rhel30 externals Release3.0 updates' |
3.0.1-0 |
LFCMGR_USER |
LFC user |
lfcmgr |
4.0.5-1 |
LFCMGR_GROUP |
LFC user group |
lfcmgr |
4.0.5-1 |
MY_DOMAIN |
The site's domain name. |
hostname -d |
3.0.1-0 |
ORACLE_LOCATION |
The location of the oracle client libraries |
/usr/lib/oracle/10.2.0.3 |
3.0.1-0 (value updated in 4.0.6-1) |
OUTPUT_STORAGE |
Default Output directory for the jobs. |
/tmp/jobOutput |
3.0.1-0 |
REG_HOST |
RGMA Registry hostname. |
lcgic01.gridpp.rl.ac.uk |
3.0.1-0 |
REPOSITORY_TYPE |
It can only be apt, and it is only valid for gLite 3.0. For gLite 3.1 please see the gLite 3.1 repository documentation |
apt |
3.0.1-0 |
RGMA_USER |
RGMA user |
rgma |
4.0.5-1 |
RGMA_GROUP |
RGMA user group |
rgma |
4.0.5-1 |
SE_ARCH |
"disk, tape, multidisk, other" It populates GlueSEArchitecture. |
multidisk |
3.0.1-0 |
TOMCAT_USER |
tomcat user |
tomcat |
4.0.8-1 |
TRUSTMANAGER_CRL_UPDATE_INTERVAL |
This variable is used in the trustmanager configuration and it defines how often the X509_CERT_DIR is polled for changes in the files. It's a number followed by h,m or s time units. |
2h |
4.0.8-1 |
UNPRIVILEGED_MKGRIDMAP |
Controls whether the grid-map file contains only mappings to ordinary pool accounts: yes creates a grid-mapfile with only mappings to ordinary pool accounts, while no also includes special users if they are defined in groups.conf. Note that this variable should be specified per VO in yaim core >= 4.0.10-1. |
no |
4.0.6-1 |
YAIM_LOGGING_LEVEL |
The logging level to print debugging information. Possible values are NONE, ABORT, ERROR, WARNING, INFO, DEBUG. |
INFO |
3.0.1-0 |
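YAIM sources site-info.def as an ordinary shell script, so the variables above are plain bash assignments. A minimal fragment using a few of the general variables might look like this (hostnames and the password are hypothetical placeholders, not recommended values):

```shell
# site-info.def -- sketch of a few general variables (example values only)
APEL_DB_PASSWORD=mypassword          # database password for APEL (placeholder)
BATCH_SERVER=yaim-pbs.example.org    # batch server hostname (hypothetical)
BATCH_LOG_DIR=/var/spool/pbs         # for Torque/PBS: directory containing server_logs
GLOBUS_TCP_PORT_RANGE="20000,25000"  # Globus IO port range as "num1,num2"
GRIDFTP_CONNECTIONS_MAX=150          # max simultaneous gridftp connections
YAIM_LOGGING_LEVEL=INFO              # NONE, ABORT, ERROR, WARNING, INFO or DEBUG
```

Since the file is sourced by the shell, values containing commas or spaces must be quoted, as with GLOBUS_TCP_PORT_RANGE above.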
site-info.post
Variable name |
Description |
Default value |
YAIM version >= |
CA_CERTIFICATES_DIR |
path |
${X509_CERT_DIR} |
4.0.1-4 |
EDGUSERS |
edg users configuration file. If you disable YAIM user configuration, make sure you add these users and groups in your system. The format of this file is: user:id:group:gip:description:home. More details can be found in /opt/glite/yaim/defaults/edgusers.conf.README |
$INSTALL_ROOT/glite/yaim/examples/edgusers.conf |
4.0.5-1 |
DN_GRIDMAPFILE |
Used only by the 3.1 lcg CE to generate the DN gridmap file |
/etc/grid-security/dn-grid-mapfile |
4.0.5-5 |
GRID_ENV_LOCATION |
Location of the grid-env.(c)sh file |
$INSTALL_ROOT/glite/etc/profile.d |
4.0.3-6 |
GRIDMAPFILE |
path of the gridmap file |
/etc/grid-security/grid-mapfile |
4.0.5-5 |
GRIDMAPDIR |
path of the gridmap directory |
/etc/grid-security/gridmapdir |
4.0.5-5 |
GROUPMAPFILE |
path of the LCMAPS groupmap file |
/etc/grid-security/groupmapfile |
4.0.5-5 |
GLITE_LOCATION |
- |
$INSTALL_ROOT/glite |
4.0.3-6 |
GLITE_LOCATION_VAR |
- |
$GLITE_LOCATION/var |
4.0.3-6 |
GLITE_LOCATION_LOG |
- |
$GLITE_LOCATION/var/log |
4.0.3-6 |
GLITE_LOCATION_TMP |
- |
$GLITE_LOCATION/tmp |
4.0.3-6 |
GLOBUS_LOCATION |
- |
$INSTALL_ROOT/globus |
4.0.3-6 |
TOMCAT_HOSTCERT_LOCATION |
path of the tomcat host certificate |
/etc/grid-security/tomcat-cert.pem |
4.0.8-1 |
TOMCAT_HOSTKEY_LOCATION |
path of the tomcat host key |
/etc/grid-security/tomcat-key.pem |
4.0.8-1 |
VOMS_GRIDMAPFILE |
Used only by the 3.1 lcg CE to generate the VOMS FQANS file |
/etc/grid-security/voms-grid-mapfile |
4.0.5-5 |
X509_CERT_DIR |
path of the trusted CA files |
/etc/grid-security/certificates/ |
4.0.5-5 |
X509_HOST_CERT |
path of the host certificate |
/etc/grid-security/hostcert.pem |
4.0.5-5 |
X509_HOST_KEY |
path of the host key |
/etc/grid-security/hostkey.pem |
4.0.5-5 |
X509_VOMS_DIR |
path of the voms trusted servers |
/etc/grid-security/vomsdir/ |
4.0.5-5 |
YAIM_LOG |
Location of the YAIM log file |
$INSTALL_ROOT/glite/yaim/log/yaimlog |
4.0.3-6 |
Service configuration variables
NOTE: Some yaim modules have started to distribute node-specific variables that in some cases used to be part of site-info.def. This documentation already describes the situation where all node-specific variables are distributed by the corresponding yaim module. Remember that this is not yet the case for all yaim modules, so some of the files described here may not exist yet in the yaim module.
In order to configure a service you need to define some variables distributed in different files. You can define these variables in your
site-info.def
or leave them under
siteinfo/services
directory, where
siteinfo
is the directory where your
site-info.def
is located. Default variables can be redefined as well in one of the two locations.
The files where the variables are located are:
- Mandatory general variables: sys admins must define these variables. They can be found in
/opt/glite/yaim/examples/siteinfo/site-info.def
and are described in the previous section site-info.def variables.
- Mandatory service specific variables: sys admins must define these variables. They can be found in
/opt/glite/yaim/examples/siteinfo/services/node-type
and are described in the following sections.
- Default general variables: sys admins don't need to define these variables unless they want a specific value for their site which is different from the default one. They can be found in
/opt/glite/yaim/defaults/site-info.pre (or .post)
and are described in the previous sections on site-info.pre and site-info.post variables.
- Default service specific variables: sys admins don't need to define these variables unless they want a specific value for their site which is different from the default one. They can be found in
/opt/glite/yaim/defaults/node-type.pre (or .post)
and are described in the following sections.
All the services need to have
INSTALL_ROOT
defined. This variable is always defined in
site-info.pre
and defaults to
/opt
.
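The default files rely on being sourced after site-info.def: the ${VAR:-default} expansions shown in the tables above only take effect when the site admin has not already set the variable. A small runnable sketch of this mechanism (the unset line is for demonstration only, so the defaults visibly apply):

```shell
# Demonstration: start from a clean slate so the defaults are used.
unset INSTALL_ROOT GLITE_LOCATION GLITE_LOCATION_VAR

# This is how site-info.pre/post style defaults compose:
INSTALL_ROOT=${INSTALL_ROOT:-/opt}                         # default unless set earlier
GLITE_LOCATION=${GLITE_LOCATION:-$INSTALL_ROOT/glite}      # derived from INSTALL_ROOT
GLITE_LOCATION_VAR=${GLITE_LOCATION_VAR:-$GLITE_LOCATION/var}

echo "$GLITE_LOCATION_VAR"    # prints /opt/glite/var when nothing was overridden
```

Setting, say, INSTALL_ROOT=/usr/local in site-info.def before these lines run would propagate into every derived path.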
AMGA
AMGA oracle
- Mandatory general variables
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/siteinfo/services/glite-amga_oracle
.
Variable Name |
Description |
Value type |
Version |
AMGA_ROOT_USER_DN |
The root's DN that will authenticate the root user |
String |
4.0.2-1 |
AMGA_TEST_USER_DN |
The test_user's DN that will authenticate the SAM tests' user |
String |
4.0.2-1 |
AMGA_DB_USERNAME |
DB user name |
String |
4.0.3-3 |
AMGA_DB_PASSWORD |
DB user password |
String |
4.0.3-3 |
AMGA_ODBC_DATA_SOURCE |
Data source name |
String |
4.0.3-3 |
AMGA_ORACLE_TNS_ADMIN_PATH |
The path in which the TNSNAMES.ora file exists |
path |
4.0.3-3 |
AMGA_ORACLE_CONNECTION_STRING |
The connection string to use for the Oracle server with the sqlplus command. Example: oracleuser/secret_password@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oracle.grid.ucy.ac.cy)(PORT=1521))(CONNECT_DATA=(SID=orcl))) |
connection string |
4.0.3-3 |
AMGA postgres
- Mandatory general variables
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/siteinfo/services/glite-amga_postgres
.
Variable Name |
Description |
Value type |
Version |
AMGA_ROOT_USER_DN |
The root's DN that will authenticate the root user |
String |
4.0.2-1 |
AMGA_TEST_USER_DN |
The test_user's DN that will authenticate the SAM tests' user |
String |
4.0.2-1 |
APEL
- Mandatory general variables
-
APEL_DB_PASSWORD
-
CE_HOST
-
MON_HOST
-
MYSQL_PASSWORD
-
SITE_NAME
- Default service specific variables: can be found in
/opt/glite/yaim/defaults/glite-apel.pre
:
Variable Name |
Description |
Value type |
Default Value |
Version |
APEL_PUBLISH_USER_DN |
If set to "yes" it will enable UserDN encryption |
String |
no |
4.0.2-2 |
APEL_PUBLISH_LIMIT |
Number of records that APEL will select in one go. The value should be adjusted according to the memory assigned to the Java VM: in general, for 512Mb the number of records should be 150000, and for 1024Mb around 300000. The default value included in the APEL code is 300000, as the default memory is 1024Mb. |
number |
300000 |
4.0.2-7 |
MYSQL_HOST |
The name of the host where the mysql server is located |
hostname |
localhost |
4.0.2-6 |
MYSQL_REMOTE_USER |
The name of the user for access to the remote MySQL server |
user name |
root |
4.0.2-6 |
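Putting the APEL pieces together, a hypothetical siteinfo fragment overriding the defaults above could look like this (hostname, user and password are placeholders):

```shell
# APEL accounting settings (all values hypothetical)
APEL_DB_PASSWORD=secret             # placeholder; use a real password
MYSQL_HOST=mon.example.org          # remote MySQL server instead of localhost
MYSQL_REMOTE_USER=accounting        # non-root user for the remote server
APEL_PUBLISH_LIMIT=150000           # suited to ~512Mb of JVM memory, per the table above
APEL_PUBLISH_USER_DN=no             # leave UserDN encryption disabled
```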
ARGUS
- Mandatory general variables
-
GROUPS_CONF
-
USERS_CONF
-
VOS
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/siteinfo/services/glite-authz_server
.
Variable Name |
Description |
Value type |
Version |
ARGUS_HOST |
Hostname of the Argus node. |
FQDN Hostname |
1.1.0-1 |
PAP_ADMIN_DN |
User certificate DN of the user that will be the PAP administrator. |
Certificate DN |
1.0.0-1 |
- Default service specific variables: they can be found in
/opt/glite/yaim/defaults/glite-authz_server.pre (or .post)
:
Variable Name |
Description |
Value type |
Default Value |
Version |
CONFIG_PAP |
Set this variable to 'no' if you don't want yaim to create the pap_configuration.ini file |
string |
yes |
1.0.0-1 |
CONFIG_PDP |
Set this variable to 'no' if you don't want yaim to create the pdp.ini file |
string |
yes |
1.0.0-1 |
CONFIG_PEP |
Set this variable to 'no' if you don't want yaim to create the pepd.ini file |
string |
yes |
1.0.0-1 |
PAP_HOME |
Home directory of the pap service |
path |
${PAP_HOME:-${INSTALL_ROOT}/argus/pap} |
1.0.0-1 |
PAP_ENTITY_ID |
This is a unique identifier for the PAP. It must be a URI (URL or URN) and the same entity ID should be used for all PAP instances that make up a single logical PAP. If a URL is used it need not resolve to any specific webpage. |
URI |
${PAP_ENTITY_ID:-"http://${ARGUS_HOST}/pap"} |
1.1.0-1 |
PAP_HOST |
Set this variable to another value if PAP_HOST is not installed in the same host as PDP and PEP. |
IP address |
127.0.0.1 |
1.0.0-1 |
PAP_CONF_INI |
Configuration file for the pap service |
path |
${PAP_CONF_INI:-${PAP_HOME}/conf/pap_configuration.ini} |
1.0.0-1 |
PAP_AUTHZ_INI |
Configuration file for the pap service authorization policies |
path |
${PAP_AUTHZ_INI:-${PAP_HOME}/conf/pap_authorization.ini} |
1.0.0-1 |
PAP_REPO_LOCATION |
Path to the repository directory |
path |
${PAP_REPO_LOCATION:-${PAP_HOME}/repository} |
1.0.0-1 |
PAP_POLL_INTERVAL |
The polling interval (in seconds) for retrieving remote policies |
number |
14400 |
1.0.0-1 |
PAP_ORDERING |
Comma separated list of pap aliases. Example: alias-1, alias-2, ..., alias-n. Defines the order of evaluation of the policies of the paps, that means that the policies of pap "alias-1" are evaluated for first, then the policies of pap "alias-2" and so on. |
string |
default |
1.0.0-1 |
PAP_CONSISTENCY_CHECK |
Forces a consistency check of the repository at startup. |
boolean |
false |
1.0.0-1 |
PAP_CONSISTENCY_CHECK_REPAIR |
if set to true automatically fixes problems detected by the consistency check (usually means deleting the corrupted policies). |
boolean |
false |
1.0.0-1 |
PAP_PORT |
PAP standalone service port |
port |
8150 |
1.0.0-1 |
PAP_SHUTDOWN_PORT |
PAP standalone shutdown service port |
port |
8151 |
1.0.0-1 |
PAP_SHUTDOWN_COMMAND |
PAP standalone shutdown command (password) |
string |
pseudo-randomly generated |
1.1.0-1 |
PDP_HOME |
Home directory of the pdp service |
path |
${PDP_HOME:-${INSTALL_ROOT}/argus/pdp} |
1.0.0-1 |
PDP_CONF_INI |
Configuration file for the PDP service |
path |
${PDP_CONF_INI:-${PDP_HOME}/conf/pdp.ini} |
1.0.0-1 |
PDP_ENTITY_ID |
This is a unique identifier for the PDP. It must be a URI (URL or URN) and the same entity ID should be used for all PDP instances that make up a single logical PDP. If a URL is used it need not resolve to any specific webpage. |
URI |
${PDP_ENTITY_ID:-"http://${ARGUS_HOST}/pdp"} |
1.1.0-1 |
PDP_HOST |
Set this variable to another value if PDP_HOST is not installed in the same host as PAP and PEP. |
IP address |
127.0.0.1 |
1.0.0-1 |
PDP_PORT |
PDP standalone service port |
port |
8152 |
1.0.0-1 |
PDP_ADMIN_PORT |
PDP admin service port |
port |
8153 |
1.1.0-1 |
PDP_ADMIN_PASSWORD |
PDP admin service password for shutdown, reload policy, ..., commands |
string |
PSEUDO_RANDOM |
1.1.0-1 |
PDP_RETENTION_INTERVAL |
The number of minutes the PDP will retain (cache) a policy retrieved from the PAP. After this time is passed the PDP will again call out to the PAP and retrieve the policy |
number |
240 |
1.0.0-1 |
PDP_PAP_ENDPOINTS |
Space separated list of PAP endpoint URLs for the PDP to use. Endpoints will be tried in turn until one returns a successful response. This provides limited failover support. If more intelligent failover is necessary or load balancing is required, a dedicated load-balancer/failover appliance should be used. |
URLs |
${PDP_PAP_ENDPOINTS:-"https://${PAP_HOST}:8150/pap/services/ProvisioningService"} |
1.1.0-1 |
PEP_HOME |
Home directory for the pep service |
path |
${PEP_HOME:-${INSTALL_ROOT}/argus/pepd} |
1.0.0-1 |
PEP_CONF_INI |
Configuration for the pep service |
path |
${PEP_CONF_INI:-${PEP_HOME}/conf/pepd.ini} |
1.0.0-1 |
PEP_ENTITY_ID |
This is a unique identifier for the PEP. It must be a URI (URL or URN) and the same entity ID should be used for all PEP instances that make up a single logical PEP. If a URL is used it need not resolve to any specific webpage. |
URI |
${PEP_ENTITY_ID:-"http://${ARGUS_HOST}/pepd"} |
1.1.0-1 |
PEP_HOST |
Set this variable to another value if PEP_HOST is not installed in the same host as PAP and PDP. But remember to use the hostname and not 127.0.0.1 ! |
hostname |
${ARGUS_HOST} |
1.1.0-1 |
PEP_PORT |
PEP service port |
port |
8154 |
1.0.0-1 |
PEP_ADMIN_PORT |
PEP admin service port |
port |
8155 |
1.1.0-1 |
PEP_ADMIN_PASSWORD |
PEP admin service password for shutdown, clear cache, ..., commands |
string |
pseudo-randomly generated |
1.1.0-1 |
PEP_MAX_CACHEDRESP |
The maximum number of responses from any PDP that will be cached. Setting this value to 0 (zero) will disable caching. |
number |
500 |
1.0.0-1 |
PEP_PDP_ENDPOINTS |
Space separated list of PDP endpoint URLs for the PEP to use. Endpoints will be tried in turn until one returns a successful response. This provides limited failover support. If more intelligent failover is necessary or load balancing is required, a dedicated load-balancer/failover appliance should be used. |
URLs |
${PEP_PDP_ENDPOINTS:-"https://${PDP_HOST}:8152/authz"} |
1.1.0-1 |
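A minimal sketch of a services/glite-authz_server file based on the variables above (the hostname and DN are hypothetical placeholders):

```shell
# Mandatory ARGUS variables (example values only)
ARGUS_HOST=argus.example.org
PAP_ADMIN_DN="/DC=org/DC=example/CN=Argus Admin"   # hypothetical PAP administrator DN

# Optional: skip yaim's generation of pepd.ini if you maintain it by hand
CONFIG_PEP=no
```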
BDII
site BDII
- Mandatory general variables
-
CE_HOST
-
SITE_BDII_HOST
-
SITE_EMAIL
-
SITE_LAT
-
SITE_LONG
-
SITE_NAME
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/siteinfo/services/glite-bdii_site
:
Variable Name |
Description |
Value type |
Version |
BDII_REGIONS |
List of host identifiers publishing information to the BDII. For each item listed in the BDII_REGIONS variable you need to create a BDII_<host-id>_URL variable |
node-type name |
3.0.1-0 |
BDII_<host-id>_URL |
URL of the information producer, e.g. BDII_host1_URL="ldap://host1_hostname:2170/mds-vo-name=resource,o=grid", where host1 is a host on which several node types may be installed, for example an lcg CE and a site BDII. It is therefore necessary to create one variable per host, not per node type |
URL(*) |
3.0.1-0 |
SITE_DESC |
Long-format name of your site |
"A long format name of your site" |
glite-yaim-bdii 4.0.4-2 |
SITE_LOC |
Location of the site BDII |
"City, Country" |
3.0.1-0 |
SITE_OTHER_GRID |
Grid to which your site belongs, e.g. WLCG or EGI. Use | to separate values. |
Grid project name |
glite-yaim-bdii 4.0.4-4 |
SITE_OTHER_* |
For more details, please visit https://wiki.egi.eu/wiki/MAN1_How_to_publish_Site_Information |
SITE_OTHER_GRID="WLCG|EGI" |
glite-yaim-bdii 4.0.4-2 |
SITE_SECURITY_EMAIL |
Contact email for security |
e-mail address |
glite-yaim-bdii 4.0.4-2 |
SITE_SUPPORT_EMAIL |
The site user support e-mail address as published by the information system |
e-mail address |
3.0.1-0 |
SITE_SUPPORT_SITE |
Support entry point. Unique Id for the site in the GOC DB and information system |
my-bigger-site.their-domain |
Deleted >= glite-yaim-bdii 4.0.4-2 |
SITE_TIER |
Site tier |
TIER 1 or TIER 2 |
Deleted >= glite-yaim-bdii 4.0.4-2 |
SITE_WEB |
Site web site |
URL |
3.0.1-0 |
(*) The URL is in the form of
ldap://information_producer_host_name:2170/mds-vo-name=resource,o=grid
.
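For example, a site BDII aggregating information from a CE host and an SE host could be configured like this in services/glite-bdii_site (all hostnames hypothetical):

```shell
# Site BDII configuration sketch (example hostnames only)
SITE_BDII_HOST=sbdii.example.org
BDII_REGIONS="CE SE"                  # one identifier per information producer host
BDII_CE_URL="ldap://ce.example.org:2170/mds-vo-name=resource,o=grid"
BDII_SE_URL="ldap://se.example.org:2170/mds-vo-name=resource,o=grid"
SITE_LOC="Geneva, Switzerland"
SITE_DESC="Example grid site"
```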
top BDII
- Mandatory general variables
- Default service specific variables: they can be found in
/opt/glite/yaim/defaults/glite-bdii_top.pre
:
Please check this twiki page for more details.
- Mandatory service specific variables: They can be found in
/opt/glite/yaim/examples/siteinfo/services/glite-cluster
:
The new variable names follow this syntax:
- In general, in variables based on hostnames, queues or VOViews the characters '.' and '-' must each be replaced with '_'
- <host-name>: identifier that corresponds to the CE hostname in lower case. Example: ctb-generic-1.cern.ch -> ctb_generic_1_cern_ch
- <cluster-identifier>: identifier that corresponds to the cluster identifier in upper case. Example: my_cluster -> MY_CLUSTER
- <subcluster-identifier>: identifier that corresponds to the subcluster identifier in upper case. Example: my_subcluster -> MY_SUBCLUSTER
Most of these variables correspond to variables that are defined for a CE when it is deployed in non-cluster mode. Please refer to the description of the old variable where applicable as shown in the table below.
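The substitution rules above can be sketched in shell (the hostname and identifiers are hypothetical, used only to illustrate the transformation):

```shell
# Build the <host-name> part: '.' and '-' each become '_'
host="ctb-generic-1.cern.ch"
host_id=$(echo "$host" | tr '.-' '__')
echo "$host_id"      # ctb_generic_1_cern_ch

# Cluster/subcluster identifiers are upper-cased
cluster="my_cluster"
cluster_id=$(echo "$cluster" | tr '[:lower:]' '[:upper:]')
echo "$cluster_id"   # MY_CLUSTER
```

So the queue list for this CE would be held in a variable named CE_HOST_ctb_generic_1_cern_ch_QUEUES.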
Variable Name |
Description |
Value type |
Version |
CE_HOST_<host-name>_CE_TYPE |
CE type: 'jobmanager' for lcg CE and 'cream' for cream CE |
string |
glite-yaim-cluster 1.0.0-2 |
CE_HOST_<host-name>_CE_InfoJobManager |
The name of the job manager used by the CE. This variable has been renamed in the new infosys configuration; the old variable name was JOB_MANAGER. Please define it as pbs, lcgpbs, lsf, lcglsf, etc. |
string |
glite-yaim-cluster 1.0.0-2 |
CE_HOST_<host-name>_QUEUES |
Space separated list of the queue names configured in the CE. |
string |
glite-yaim-cluster 1.0.0-2 |
CLUSTER_HOST |
hostname where the cluster is configured |
hostname |
glite-yaim-cluster 1.0.0-2 |
CLUSTERS |
Space separated list of your cluster identifiers, e.g. CLUSTERS="cluster1 [cluster2 [...]]". The identifiers are only used within yaim configuration files. |
string list |
glite-yaim-cluster 1.0.0-1 |
CLUSTER_<cluster-identifier>_CLUSTER_UniqueID |
Cluster UniqueID. It may contain lowercase alphanumeric characters, dot, dash and underscore only. It must be globally unique, for instance base it on the DNS domain. |
string |
glite-yaim-cluster 1.0.0-1 |
CLUSTER_<cluster-identifier>_CLUSTER_Name |
Cluster human readable name |
string |
glite-yaim-cluster 1.0.0-1 |
CLUSTER_<cluster-identifier>_SITE_UniqueID |
Name of the site the cluster belongs to. It should be consistent with your SITE_NAME variable. NOTE: This may be changed to SITE_UniqueID when the GlueSite is configured with the new infosys variables |
string |
glite-yaim-cluster 1.0.0-1 |
CLUSTER_<cluster-identifier>_CE_HOSTS |
Space separated list of CE hostnames configured in the cluster |
hostname list |
glite-yaim-cluster 1.0.0-1 |
CLUSTER_<cluster-identifier>_SUBCLUSTERS |
Space separated list of your subcluster identifiers, e.g. SUBCLUSTERS="subcluster1 [subcluster2 [...]]". The identifiers are only used within yaim configuration files. |
string list |
glite-yaim-cluster 1.0.0-1 |
COMPUTING_SERVICE_ID |
The Glue2 computing service id |
String |
glite-yaim-cluster 2.1.0-3 |
SUBCLUSTER_<subcluster-identifier>_SUBCLUSTER_UniqueID |
Subcluster UniqueID. It may contain lowercase alphanumeric characters, dot, dash and underscore only. It must be globally unique within all Subcluster UniqueIDs, for instance base it on the DNS domain to ensure it will not collide with an ID at another site. Typically if a cluster will only have one subcluster the Subcluster UniqueID may be set to be the same as the Cluster UniqueID. |
string |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_HOST_ApplicationSoftwareRunTimeEnvironment |
"sw1 [| sw2 [| ...]" old CE_RUNTIMEENV |
string list |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_HOST_ArchitectureSMPSize |
old CE_SMPSIZE |
number |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_HOST_ArchitecturePlatformType |
old CE_OS_ARCH |
string |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_HOST_BenchmarkSF00 |
old CE_SF00 |
number |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_HOST_BenchmarkSI00 |
old CE_SI00 |
number |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_HOST_MainMemoryRAMSize |
old CE_MINPHYSMEM |
number |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_HOST_MainMemoryVirtualSize |
old CE_MINVIRTMEM |
number |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_HOST_NetworkAdapterInboundIP |
old CE_INBOUNDIP |
boolean |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_HOST_NetworkAdapterOutboundIP |
old CE_OUTBOUNDIP |
boolean |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_HOST_OperatingSystemName |
old CE_OS |
OS name |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_HOST_OperatingSystemRelease |
old CE_OS_RELEASE |
OS release |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_HOST_OperatingSystemVersion |
old CE_OS_VERSION |
OS version |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_HOST_ProcessorClockSpeed |
old CE_CPU_SPEED |
number |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_HOST_ProcessorModel |
old CE_CPU_MODEL |
string |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_HOST_ProcessorOtherDescription |
old CE_OTHERDESCR |
string |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_HOST_ProcessorVendor |
old CE_CPU_VENDOR |
string |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_SUBCLUSTER_Name |
subcluster human readable name |
string |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_SUBCLUSTER_PhysicalCPUs |
old CE_PHYSCPU |
number |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_SUBCLUSTER_LogicalCPUs |
old CE_LOGCPU |
number |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_SUBCLUSTER_TmpDir |
tmp directory |
path |
glite-yaim-cluster 1.0.0-1 |
SUBCLUSTER_<subcluster-identifier>_SUBCLUSTER_WNTmpDir |
WN tmp directory |
path |
glite-yaim-cluster 1.0.0-1 |
WORKING_AREA_FREE |
To set the relevant attribute for the Glue2 ComputingManager object |
glite-yaim-cluster 2.1.0-3 |
WORKING_AREA_GUARANTEED |
To set the relevant attribute for the Glue2 ComputingManager object |
glite-yaim-cluster 2.1.0-3 |
WORKING_AREA_LIFETIME |
To set the relevant attribute for the Glue2 ComputingManager object |
glite-yaim-cluster 2.1.0-3 |
WORKING_AREA_MULTISLOT_FREE |
To set the relevant attribute for the Glue2 ComputingManager object |
glite-yaim-cluster 2.1.0-3 |
WORKING_AREA_MULTISLOT_LIFETIME |
To set the relevant attribute for the Glue2 ComputingManager object |
glite-yaim-cluster 2.1.0-3 |
WORKING_AREA_MULTISLOT_TOTAL |
To set the relevant attribute for the Glue2 ComputingManager object |
glite-yaim-cluster 2.1.0-3 |
WORKING_AREA_SHARED |
To set the relevant attribute for the Glue2 ComputingManager object |
glite-yaim-cluster 2.1.0-3 |
WORKING_AREA_TOTAL |
To set the relevant attribute for the Glue2 ComputingManager object |
glite-yaim-cluster 2.1.0-3 |
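Tying the naming rules and variables above together, here is a sketch of a single-cluster, single-subcluster configuration (all identifiers and hostnames are hypothetical placeholders):

```shell
# glite-cluster configuration sketch (example values only)
CLUSTER_HOST=ce1.example.org
CLUSTERS="mycluster"                                  # identifier, upper-cased in variable names
CLUSTER_MYCLUSTER_CLUSTER_UniqueID=mycluster.example.org
CLUSTER_MYCLUSTER_CLUSTER_Name="Example cluster"
CLUSTER_MYCLUSTER_SITE_UniqueID=EXAMPLE-SITE          # keep consistent with SITE_NAME
CLUSTER_MYCLUSTER_CE_HOSTS="ce1.example.org"
CLUSTER_MYCLUSTER_SUBCLUSTERS="mysub"
# With a single subcluster, its UniqueID may equal the cluster UniqueID:
SUBCLUSTER_MYSUB_SUBCLUSTER_UniqueID=mycluster.example.org
SUBCLUSTER_MYSUB_SUBCLUSTER_Name="Example subcluster"
```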
CONDOR
CONDOR server
The Condor Server is a glite-WN configured as a batch server for Condor. The variables needed to configure it are:
- Mandatory general variables: with the following meaning for CONDOR:
Variable Name |
Description |
Value type |
Example |
BATCH_SERVER |
Hostname of the condor head node |
String |
condor.pic.es |
BATCH_VERSION |
Condor release number |
Number |
7.2.4 |
BATCH_BIN_DIR |
Directory of the condor commands |
String |
$BATCH_LOCATION/bin |
BATCH_SBIN_DIR |
Directory of the condor administrative commands |
String |
$BATCH_LOCATION/sbin |
CONDOR client
The Condor Client is a Worker Node configured as an executor for Condor. You need the same variables used for the Condor Server configuration.
CONDOR Utils
The Condor Utils is an lcg-CE or creamCE configured as a job submitter for Condor. It also provides the information to be published via the site BDII and parses the accounting data into the APEL database located on the MON_HOST. The variables needed to configure it are:
Variable Name |
Description |
Value type |
Example |
TYPECONDORCONF |
Two possible values ("tight" or "loose"). "tight" means YAIM will try to configure Condor appropriately; "loose" means the Condor configuration will NOT be touched and is left entirely to the admin |
String |
loose |
APEL_DB_PASSWORD |
Passphrase for the APEL DB |
String |
;-) |
BATCH_BIN_DIR |
Directory of the condor commands |
String |
$BATCH_LOCATION/bin |
BATCH_SBIN_DIR |
Directory of the condor administrative commands |
String |
$BATCH_LOCATION/sbin |
BATCH_SERVER |
Hostname of the condor head node |
String |
condor.pic.es |
BATCH_VERSION |
Condor release number |
Number |
6.8.6 |
CE_HOST |
Hostname of the condor submitter |
String |
vce01.pic.es |
JOB_MANAGER |
Name of the Job Manager |
String |
condor |
MON_HOST |
Hostname of the location of the APEL DB |
String |
vrgma01.pic.es |
SITE_NAME |
Name of your site |
String |
PIC-SA3 |
VOS |
Names of the VOs to be supported by the pool |
List of strings |
"dteam ops" |
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/siteinfo/services/glite-condor-utils
.
NOTE: There are no queues in the "conventional" sense in Condor. Set the variable QUEUES to the short hostname of the Condor server (e.g. QUEUES=condor). Then set the variable ${QUEUES}_GROUP_ENABLE according to your access policy for the Condor pool, e.g. as you would do in PBS.
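Following that note, a minimal sketch of the Condor "queue" settings (pool hostname and groups are hypothetical placeholders):

```shell
# Condor Utils queue sketch (example values only)
BATCH_SERVER=condor.example.org
QUEUES="condor"                       # short hostname of the condor server
CONDOR_GROUP_ENABLE="dteam ops"       # i.e. ${QUEUES}_GROUP_ENABLE, per your access policy
```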
- Default general variables: with the following meaning for CONDOR:
Variable Name |
Description |
Value type |
Default Value |
APEL_DB |
DB for the accounting |
String |
accounting |
APEL_DB_USER |
User that writes into the APEL DB |
String |
$APEL_DB |
APEL_DB_URL |
URL of the accounting DB |
String |
"jdbc:mysql://$MON_HOST:3306/$APEL_DB_USER" |
NOTE: Clarify all APEL-related variables with your DBA.
- List of daemons started by YAIM (as shown by ps aux | grep condor):
Condor Server
- condor_master
- condor_collector
- condor_negotiator
Condor Client
- condor_master
- condor_startd
Condor Utils
- condor_master
- condor_schedd
- List of cron jobs configured by YAIM
There is a daily logrotate job running on the Condor Utils node to prepare the accounting information taken from condor_history.
cream CE
For more details about the cream CE configuration, please check the
CREAM CE System Administrator guide
.
There is a
tool
that checks your cream CE configuration.
- Mandatory general variables
-
-
BATCH_LOG_DIR
(for Torque/PBS, this must be set to the directory containing the server_logs directory; usually /var/torque)
-
BATCH_SERVER
-
BDII_HOST
-
CE_BATCH_SYS
-
CE_CAPABILITY
(mandatory for yaim-cream-ce >= 4.0.9-3)
-
CE_CPU_MODEL
-
CE_CPU_SPEED
-
CE_CPU_VENDOR
-
CE_HOST
-
CE_INBOUNDIP
-
CE_LOGCPU
-
CE_MINPHYSMEM
-
CE_MINVIRTMEM
-
CE_OS
-
CE_OTHERDESCR
(mandatory for yaim-cream-ce >= 4.0.9-3)
-
CE_SMPSIZE
-
CE_OS_RELEASE
-
CE_OS_VERSION
-
CE_OS_ARCH
-
CE_OUTBOUNDIP
-
CE_PHYSCPU
-
CE_RUNTIMEENV
-
CE_SF00
-
CE_SI00
-
GROUPS_CONF
-
<queue-name>_GROUP_ENABLE
-
JOB_MANAGER
-
SE_LIST
-
SE_MOUNT_INFO_LIST
(mandatory for yaim-cream-ce >= 4.0.8-2)
-
USERS_CONF
-
QUEUES
-
VOS
-
VO_<vo-name>_SW_DIR
-
VO_<vo-name>_VOMS_SERVERS
-
VO_<vo-name>_VOMSES
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/siteinfo/services/glite-creamce
:
Variable Name |
Description |
Value type |
Version |
BATCH_CONF_DIR |
Path where lsf.conf is located. Only when configuring LSF as a batch system. |
String |
4.0.4-12 |
BLPARSER_HOST |
Fully qualified name of machine hosting the BLAH blparser |
String |
4.0.4-12 |
CEMON_HOST |
Fully qualified name of the CEMon host (do not use localhost!) |
String |
4.0.4-12 |
CREAM_DB_USER |
Cream DB user name |
String |
4.0.4-12 |
CREAM_DB_PASSWORD |
CREAM DB password |
String |
4.0.9-2 |
- Default service specific variables: they can be found in
/opt/glite/yaim/defaults/glite-creamce.pre
or /opt/glite/yaim/defaults/glite-creamce.post
:
Variable Name |
Description |
Value type |
Default Value |
Version |
ACCESS_BY_DOMAIN |
By default the cream DB is on localhost and it's accessible only from localhost. Setting this variable to true will allow all computers in your domain to access the cream DB |
String |
No |
4.0.7-0 |
BLAH_CHILD_POLL_TIMEOUT |
BLAH timeout |
Number |
200 |
4.0.7-2 |
BLAH_JOBID_PREFIX |
BLAH jobId prefix. It MUST be 6 chars long, begin with 'cr' and terminate with '_'. The other 3 characters must be alpha-numeric. It is important in case there's more than one CE using the same farm. In this case, it is suggested that each CREAM_CE has its own prefix |
String |
cream_ |
4.0.4-13 |
BLAH_JOBID_PREFIX_ES |
BLAH jobId prefix. It MUST be 6 chars long, begin with 'cr' and terminate with '_'. The other 3 characters must be alpha-numeric. It is important in case there's more than one CE using the same farm. In this case, it is suggested that each CREAM_CE has its own prefix |
String |
|
4.3.0-3 |
BLPARSER_WITH_UPDATER_NOTIFIER |
Specifies whether the new blparser (set it to 'true') or the old one (set it to 'false') should be used |
String |
false |
4.0.8-0 |
BLP_PORT |
Port where BLAH Blparser listens to |
Number |
33333 |
4.0.6-0 |
BUPDATER_LOOP_INTERVAL |
Used to set the value bupdater_loop_interval in blah.config. It specifies how often the batch system should be queried |
Number (secs) |
30 |
4.2.2-1 |
COMPUTING_SERVICE_ID |
The Glue2 computing service id |
String |
|
4.3.0-3 |
CREAM_CE_STATE |
Value to be published in GlueCEStateStatus (instead of Production) |
String |
Special |
4.0.5-0 |
CREAM_CONCURRENCY_LEVEL |
Used to set the value cream_concurrency_level in cream configuration file. It specifies the maximum number of blah instances that can be used in parallel to interact with the batch system |
Number |
50 |
4.2.3-1 |
CREAM_DATASOURCE_FACTORY |
CREAM Datasource Factory to be used |
String |
org.apache.commons.dbcp.BasicDataSourceFactory |
4.3.0-3 |
CREAM_DB_NAME |
Name of CREAM DB |
String |
creamdb |
4.3.0-3 |
CREAM_DB_HOST |
Hostname of machine hosting the CREAM DB |
String |
localhost |
4.1.0-0 |
CREAM_ES_DB_NAME |
Name of CREAM ES DB |
String |
esdb |
4.3.0-3 |
CREAM_ES_SANDBOX_PATH |
Sandbox path for CREAM EMI-ES |
String |
/var/cream_es_sandbox |
4.3.0-3 |
CREAM_JAVA_OPTS_HEAP |
JAVA_OPTS for HEAP |
String |
-Xms512m -Xmx2048m |
4.3.0-3 |
CREAM_PORT |
BLAH BLParser listening port |
Number |
56565 |
4.0.6-0 |
CREAM_SANDBOX_PATH |
Top directory for the job sandbox directories |
String |
/var/cream_sandbox |
4.1.1-1 |
CREAM_GLEXEC_USER_HOME |
glexec user home default |
String |
/home/glexec |
4.1.0-0 |
DEFAULT_QUEUE |
Default queue to be used if queue was not specified in EMI-ES ADL |
String |
|
4.3.0-3 |
DELEGATION_DB_NAME |
Name of Delegation DB |
String |
delegationcreamdb |
4.3.0-3 |
DELEGATION_ES_DB_NAME |
Name of Delegation ES DB |
String |
delegationesdb |
4.3.0-3 |
GLEXEC_CREAM_LCAS_CONFIG |
glexec lcas configuration file |
String |
${GLEXEC_CREAM_LCAS_DIR}/lcas-glexec.db |
4.1.0-0 |
GLEXEC_CREAM_LCAS_DIR |
directory for glexec lcas configuration file |
String |
${INSTALL_ROOT}/glite/etc/lcas |
4.1.0-0 |
GLEXEC_CREAM_LCASLCMAPS_LOG |
glexec lcas-lcmaps log file (used if GLEXEC_CREAM_LOG_DESTINATION is file) |
String |
${GLEXEC_CREAM_LOG_DIR}/lcas_lcmaps.log |
4.1.0-0 |
GLEXEC_CREAM_LCMAPS_CONFIG |
glexec lcmaps configuration file |
String |
${GLEXEC_CREAM_LCMAPS_DIR}/lcmaps-glexec.db |
4.1.0-0 |
GLEXEC_CREAM_LCMAPS_DIR |
directory for glexec lcmaps configuration file |
String |
${INSTALL_ROOT}/glite/etc/lcmaps |
4.1.0-0 |
GLEXEC_CREAM_LOG_DESTINATION |
Specifies where glexec logging should be done (syslog or file) |
String |
syslog |
4.1.0-0 |
GLEXEC_CREAM_LOG_DIR |
Glexec log files dir (used if GLEXEC_CREAM_LOG_DESTINATION is file) |
String |
/var/log/glexec |
4.1.0-0 |
GLEXEC_CREAM_LOG_FILE |
Glexec log files name (used if GLEXEC_CREAM_LOG_DESTINATION is file) |
String |
${GLEXEC_CREAM_LOG_DIR}/glexec.log |
4.1.0-0 |
GLEXEC_GROUP |
glexec group |
String |
glexec |
4.0.7-2 |
GLEXEC_USER |
glexec user |
String |
glexec |
4.0.7-2 |
LSFPROFILE_DIR |
directory where lsf.profile is installed |
String |
${BATCH_CONF_DIR} |
4.1.0-0 |
PBS_MULTIPLE_STAGING_DIRECTIVE |
Relevant when the batch system is a PBS implementation. If the value for the variable is 'yes', staging will be done using: "-W stagein=file1@host:source1,stagein=file2@host:source2" . If the value for the variable is 'no', staging will be done using: -W stagein="file1@host:source1,file2@host:source2" |
String |
yes |
4.1.2-0 |
QUEUE_xxx_CLUSTER_UniqueID |
The cluster uniqueid mapped to the specified queue |
String |
|
4.3.0-3 |
RESET_CREAM_DB_GRANTS |
If yes, yaim will remove any grants on the CREAM DB that are unneeded (for CREAM) and potentially dangerous |
String |
yes |
4.0.9-2 |
SANDBOX_TRANSFER_METHOD_BETWEEN_CE_WN |
If the value for this variable is GSIFTP, the transfer of sandbox files between the CE node and the WN is done using gridftp. If instead the value for this variable is LRMS, such file transfer is done using the batch system staging capabilities |
String |
GSIFTP |
4.2.0-0 |
SUDO_CREAM_LOG_DESTINATION |
Specifies where sudo logging should be done (syslog or file) |
String |
syslog |
4.1.0-0 |
SUDO_CREAM_LOG_DIR |
sudo log files dir (used if SUDO_CREAM_LOG_DESTINATION is file) |
String |
/var/log |
4.1.0-0 |
SUDO_CREAM_LOG_FILE |
sudo log file name (used if SUDO_CREAM_LOG_DESTINATION is file) |
String |
${SUDO_CREAM_LOG_DIR}/sudo_cream.log |
4.1.0-0 |
USE_CEMON |
Specifies if cemon has to be deployed |
String |
false |
4.1.0-0 |
USE_EMI_ES |
Specifies if EMI-ES has to be deployed |
String |
false |
4.3.0-3 |
WORKING_AREA_FREE |
To set the relevant attribute for the Glue2 ComputingManager object |
4.3.0-3 |
WORKING_AREA_GUARANTEED |
To set the relevant attribute for the Glue2 ComputingManager object |
4.3.0-3 |
WORKING_AREA_LIFETIME |
To set the relevant attribute for the Glue2 ComputingManager object |
4.3.0-3 |
WORKING_AREA_MULTISLOT_FREE |
To set the relevant attribute for the Glue2 ComputingManager object |
4.3.0-3 |
WORKING_AREA_MULTISLOT_LIFETIME |
To set the relevant attribute for the Glue2 ComputingManager object |
4.3.0-3 |
WORKING_AREA_MULTISLOT_TOTAL |
To set the relevant attribute for the Glue2 ComputingManager object |
4.3.0-3 |
WORKING_AREA_SHARED |
To set the relevant attribute for the Glue2 ComputingManager object |
4.3.0-3 |
WORKING_AREA_TOTAL |
To set the relevant attribute for the Glue2 ComputingManager object |
4.3.0-3 |
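As a sketch, the format rule for BLAH_JOBID_PREFIX described above (6 characters long, beginning with 'cr', ending with '_', with 3 alphanumeric characters in between) can be checked with a small bash test; the prefix value used here is only an illustration:

```shell
# Hypothetical sanity check for a candidate BLAH_JOBID_PREFIX:
# 6 chars long, begins with 'cr', ends with '_', middle 3 alphanumeric.
BLAH_JOBID_PREFIX="cream_"
if [[ "$BLAH_JOBID_PREFIX" =~ ^cr[[:alnum:]]{3}_$ ]]; then
    echo "prefix OK"
else
    echo "prefix invalid" >&2
fi
```

Running such a check before configuration can avoid a misconfigured CREAM CE when several CEs share the same farm.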
- List of daemons started by YAIM (as listed by the command ps aux):
- /usr/sbin/slapd
- /opt/bdii/sbin/bdii-update
- bdii-fwd
- /opt/globus/sbin/globus-gridftp-server
- /opt/glite/bin/BLParserPBS
- /opt/glite/bin/blahpd
- /bin/sh /opt/glite/etc/glite-ce-cream/blah.sh
- /opt/glite/bin/glite-lb-logd
- /opt/glite/bin/glite-lb-interlogd
- tomcat5 server
- List of cron jobs configured by YAIM: log rotation, pool account cleaning, CRL fetching
dCache
For dCache configuration variables, please check the
dCache
web page.
DPM
As soon as the fix for
40625
is released, DPM specific variables will be found in the yaim dpm module.
DPM head mysql
- Mandatory general variables
-
BDII_HOST
-
DPM_HOST
-
GROUPS_CONF
-
MYSQL_PASSWORD
-
SE_LIST
-
SITE_EMAIL
-
SITE_NAME
-
SE_GRIDFTP_LOGFILE
-
USERS_CONF
-
VOS
-
VO_<vo-name>_SW_DIR
-
VO_<vo-name>_VOMS_SERVERS
-
VO_<vo-name>_VOMS_CA_DN
-
VO_<vo-name>_VOMSES
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/services/siteinfo/glite-se_dpm_mysql
:
Variable Name |
Description |
Value type |
Version |
DPMPOOL |
The DPM pool name |
Maximum 15 character long string |
3.0.1-0 |
DPM_FILESYSTEMS |
The DPM filesystems to mount |
Space separated list of nodename:/path -like strings |
3.0.1-0 |
DPM_DB_USER |
The DPM DB user |
String |
3.0.1-0 |
DPM_DB_PASSWORD |
The DPM DB user's password |
String |
3.0.1-0 |
DPM_DB_HOST |
The DB host name |
FQDN |
3.0.1-0 |
DPM_INFO_USER |
The user name of the readonly DB account used by the information provider. |
String |
3.0.1-0 |
DPM_INFO_PASS |
The password of the DPM_INFO_USER |
String; no special characters for the moment. |
3.0.1-0 |
DPM_XROOTD_SHAREDKEY |
Shared key that must be the same between head node and all disk servers if dpm-xrootd is installed |
String between 32 and 64 characters |
4.2.8 |
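Putting the mandatory variables above together, a glite-se_dpm_mysql fragment might look like the following sketch (all hostnames, pool names, passwords and the shared key are invented placeholders):

```shell
# Hypothetical values for the mandatory DPM head node variables
DPMPOOL=dpmpool01                        # at most 15 characters
DPM_FILESYSTEMS="disk1.example.org:/storage disk2.example.org:/storage"
DPM_DB_USER=dpmmgr
DPM_DB_PASSWORD=changeme
DPM_DB_HOST=$DPM_HOST                    # DB usually lives on the head node itself
DPM_INFO_USER=dpminfo
DPM_INFO_PASS=changeme2                  # no special characters for the moment
DPM_XROOTD_SHAREDKEY=0123456789abcdef0123456789abcdef   # 32-64 chars, only if dpm-xrootd is installed
```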
- Default service specific variables: they can be found in
/opt/glite/yaim/defaults/glite-se_dpm_mysql.pre
:
Variable Name |
Description |
Value type |
Default Value |
Version |
DPM_DB |
The DPM DB name |
String |
dpm_db |
3.0.1-0 |
DPNS_DB |
The DPNS DB name |
String |
cns_db |
3.0.1-0 |
DPNS_HOST |
The DPNS server |
FQDN |
$DPM_HOST |
3.0.1-0 |
DPNS_BASEDIR |
The base directory after /dpm . Change it if you have several DPM head nodes in the same domain, to keep the name space unique. E.g. 1st head node /dpm/cern.ch/home , 2nd head node /dpm/cern.ch/home2 |
directory name |
home |
4.0.1-7 |
DPMFSIZE |
The default disk space allocated per file on a DPM node. |
Number followed by storage unit |
200M |
3.0.1-0 |
DPM_HTTPS |
Enable DPM's HTTPS access |
yes or no |
no |
4.0.1-7 |
DPM_XROOTD |
Enable DPM's xROOTD access (obsolete) |
yes or no |
no |
4.0.1-7 to 4.2.7 |
DPM_XROOTD_NOGSI |
Enable DPM's xROOTD access without GSI authentication (obsolete) |
yes or no |
no |
4.0.1-7 to 4.2.7 |
RFIO_PORT_RANGE |
The port range used by RFIO operations |
Two space-separated numbers |
20000 25000 |
3.0.1-0 |
DPM disk
- Mandatory general variables
-
BDII_HOST
-
DPM_HOST
-
GROUPS_CONF
-
SE_GRIDFTP_LOGFILE
-
SE_LIST
-
USERS_CONF
-
VOS
-
VO_<vo-name>_SW_DIR
-
VO_<vo-name>_VOMS_SERVERS
-
VO_<vo-name>_VOMS_CA_DN
-
VO_<vo-name>_VOMSES
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/siteinfo/services/glite-se_dpm_disk
:
Variable Name |
Description |
Value type |
Version |
DPMPOOL |
The DPM pool name |
Maximum 15 character long string |
3.0.1-0 |
DPM_FILESYSTEMS |
The DPM filesystems to mount |
Space separated list of nodename:/path -like strings |
3.0.1-0 |
DPM_XROOTD_SHAREDKEY |
Shared key that must be the same between head node and all disk servers if dpm-xrootd is installed |
String between 32 and 64 characters |
4.2.8 |
- Default service specific variables: they can be found in
/opt/glite/yaim/defaults/glite-se_dpm_disk.pre
:
Variable Name |
Description |
Value type |
Default Value |
Version |
DPM_DB |
The DPM DB name |
String |
dpm_db |
3.0.1-0 |
DPNS_DB |
The DPNS DB name |
String |
cns_db |
3.0.1-0 |
DPNS_HOST |
The DPNS server |
FQDN |
$DPM_HOST |
3.0.1-0 |
DPNS_BASEDIR |
The base directory after /dpm . Change it if you have several DPM head nodes in the same domain, to keep the name space unique. E.g. 1st head node /dpm/cern.ch/home , 2nd head node /dpm/cern.ch/home2 |
directory name |
home |
4.0.1-7 |
DPMFSIZE |
The default disk space allocated per file on a DPM node. |
Number followed by storage unit |
200M |
3.0.1-0 |
DPM_HTTPS |
Enable DPM's HTTPS access |
yes or no |
no |
4.0.1-7 |
DPM_XROOTD |
Enable DPM's xROOTD access (obsolete) |
yes or no |
no |
4.0.1-7 to 4.2.7 |
DPM_XROOTD_NOGSI |
Enable DPM's xROOTD access without GSI authentication (obsolete) |
yes or no |
no |
4.0.1-7 to 4.2.7 |
RFIO_PORT_RANGE |
The port range used by RFIO operations |
Two space-separated numbers |
20000 25000 |
3.0.1-0 |
E2EMONIT
- Mandatory general variables
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/services/siteinfo/glite-e2emonit
:
- Default general variables
- Default service specific variables: they can be found in
/opt/glite/yaim/defaults/glite-e2emonit.pre
:
FTA
For more details, please check:
FTS YAIM reference
- Mandatory general variables
-
BDII_HOST
-
FTS_HOST_ALIAS
-
GROUPS_CONF
-
MON_HOST
-
PX_HOST
-
SITE_NAME
-
USERS_CONF
-
VOS
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/siteinfo/services/glite-fta2
:
Variable Name |
Description |
Value type |
Version |
FTA_MACHINES |
List of the placeholders for FTA agents. |
Space separated list of names, for example "ONE TWO THREE" |
3.1.0 |
FTA_AGENTS_<place-holder>_HOSTNAME |
Host name of the placeholder where place-holder is one of the identifiers defined in FTA_MACHINES |
hostname |
3.1.0 |
FTA_AGENTS_<place-holder> |
Name of agents running on the machine of the placeholder |
Space separated names, for example "DTEAM DESY-DESY" |
3.1.0 |
FTA_<agent-name> |
For each agent specified in FTA_AGENTS_<place-holder> , the type of the agent. |
Possible values are URLCOPY , SRMCOPY , VOAGENT |
3.1.0 |
FTA_GLOBAL_DB_CONNECTSTRING |
The connect string to the DB. |
Ordinary DB connect string. |
3.1.0-1 |
FTA_GLOBAL_DB_USER |
The database user. |
User name |
3.1.0-1 |
FTA_GLOBAL_DB_PASSWORD |
The database user's password. |
Password. |
3.1.0-1 |
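The placeholder indirection in the table above can be sketched as follows; the hostnames, agent names and DB credentials are invented for illustration, and agent names have been kept to characters that are valid in shell variable names:

```shell
# Hypothetical glite-fta2 fragment: two agent machines, one VO agent and one channel agent
FTA_MACHINES="ONE TWO"
FTA_AGENTS_ONE_HOSTNAME=fta1.example.org
FTA_AGENTS_ONE="DTEAM"
FTA_AGENTS_TWO_HOSTNAME=fta2.example.org
FTA_AGENTS_TWO="STAR"
FTA_DTEAM=VOAGENT                        # agent types: URLCOPY, SRMCOPY or VOAGENT
FTA_STAR=URLCOPY
FTA_GLOBAL_DB_CONNECTSTRING="//db.example.org:1521/fts"
FTA_GLOBAL_DB_USER=fts_user
FTA_GLOBAL_DB_PASSWORD=changeme
```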
For a detailed list of optional parameters in FTA configuration, please check the links at the beginning of this section.
- Default general variables
- Default service specific variables: they can be found in
/opt/glite/yaim/defaults/glite-fta2.pre
:
FTM
For more details, please check the
File Transfer Monitor wiki.
- Mandatory general variables
-
BDII_HOST
-
GROUPS_CONF
-
FTS_HOST_ALIAS
-
PX_HOST
-
USERS_CONF
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/siteinfo/services/glite-ftm2
:
Variable Name |
Description |
Value type |
Version |
FTM_TYPES |
gridview report |
Space-delimited list of the report types you wish to start; this is the full set of options |
4.0.0 |
FTM_REPORT_INSTANCE |
tiertwo-fts-ws.cern.ch |
The name of the FTS server you are reporting on. |
4.0.0 |
FTM_REPORT_PERIODS |
hourly daily weekly |
space separated list of reporting periods, note hourly can create considerable strain on a busy FTS |
4.0.0 |
FTM_DB_CONNECT |
Most likely the same as $FTA_GLOBAL_DB_CONNECTSTRING |
Oracle connection parameters. |
4.0.0 |
FTM_DB_USER |
Most likely the same as $FTA_GLOBAL_DB_USER |
Oracle connection parameters |
4.0.0 |
FTM_DB_PASS |
Most likely the same as $FTA_GLOBAL_DB_PASSWORD |
Oracle connection parameters |
4.0.0 |
GRIDVIEW_WSDL |
http://gvarch.cern.ch:8080/wsarch/services/WebArchiverAdv?wsdl |
The endpoint of the gridview webservice to publish to |
4.0.0 |
FTS_HOST_ALIAS |
The FTS host alias |
This variable is necessary for the services.xml file generation cron job. |
4.0.0 |
- Default general variables
FTS
For more details, please check:
FTS YAIM reference
- Mandatory general variables
-
BDII_HOST
-
FTS_HOST_ALIAS
-
GROUPS_CONF
-
MON_HOST
-
PX_HOST
-
SITE_NAME
-
USERS_CONF
-
VOS
-
VO_<vo-name>_VOMS_SERVERS
-
VO_<vo-name>_VOMS_CA_DN
-
VO_<vo-name>_VOMSES
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/siteinfo/services/glite-fts2
:
Variable Name |
Description |
Value type |
Version |
FTS_DBURL |
URL of the FTS DB |
Ordinary jdbc connect string. |
3.1.0-1 |
FTS_DB_USER |
The FTS DB user |
A user name |
3.1.0-1 |
FTS_DB_USER_PASSWORD |
The FTS DB user's password |
Password |
3.1.0-1 |
- Default general variables
- Default service specific variables: they can be found in
/opt/glite/yaim/defaults/glite-fts2.pre
:
Variable Name |
Description |
Value type |
Default Value |
Version |
FTS_DB_TYPE |
The FTS DB type |
Currently only ORACLE is supported. |
oracle |
3.1.0-1 |
FTS_HOST_ALIAS |
Use this variable if you want to publish your FTS service into the BDII with a different name. |
Host name alias |
none |
3.1.0-1 |
GLEXEC_wn
- Mandatory general variables:
- Default general variables:
-
X509_VOMS_DIR
-
X509_CERT_DIR
-
GRIDMAPFILE
-
GROUPMAPFILE
-
CONFIG_USERS
-
EDG_GROUP
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/siteinfo/services/glite-glexec_wn
:
Variable Name |
Description |
Value type |
Version |
GLEXEC_WN_OPMODE |
Define this variable to configure the operation mode of glexec in your WN. The possible values are: 1) setuid: it will actually enable glexec to do the identity change. 2) log-only: it won't do any identity change but the log files will show whether the mapping was successful or not. |
String |
4.0.5-1 |
GLEXEC_WN_SCAS_ENABLED |
Define this variable to configure glexec to work against a SCAS server. 'yes' means you want to use a SCAS server and therefore you need to define the SCAS related variables below; 'no' means you don't want to use any SCAS server. See also the notes below. |
String |
4.0.5-1 |
GLEXEC_WN_ARGUS_ENABLED |
Define this variable to configure glexec as a PEP client (see the EGEE/AuthorizationFramework); 'yes' means use ARGUS, 'no' means do not use ARGUS. See also the notes below. |
String |
N/A |
SCAS_HOST |
SCAS server hostname. Define this variable if you want to configure glexec to work against a SCAS server. |
hostname |
4.0.5-1 |
SCAS_PORT |
SCAS port where the SCAS server is listening. Define this variable if you want to configure glexec to work against a SCAS server. |
port |
4.0.5-1 |
SCAS_ENDPOINTS |
complete URL of SCAS endpoint, e.g. https://scas1.example.com:8443 . Alternative to using SCAS_HOST and SCAS_PORT . Multiple values are allowed, separated by whitespace |
URL |
N/A |
ARGUS_PEPD_ENDPOINTS |
If glexec is a PEP client, define the PEPD endpoint with this variable. It currently supports only one endpoint, which should be a URL, e.g. https://argus1.example.com:8154/authz |
URL |
N/A |
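As an illustration of how these variables combine, the sketch below configures glexec in setuid mode against an ARGUS PEP daemon; the endpoint is an invented placeholder, and with GLEXEC_WN_SCAS_ENABLED=no none of the SCAS_* variables need to be defined:

```shell
# Hypothetical services/glite-glexec_wn fragment: setuid glexec with ARGUS authorization
GLEXEC_WN_OPMODE=setuid
GLEXEC_WN_ARGUS_ENABLED=yes
GLEXEC_WN_SCAS_ENABLED=no
ARGUS_PEPD_ENDPOINTS="https://argus1.example.com:8154/authz"   # only one endpoint is supported here
```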
- Optional service specific variables: they can be found in
/opt/glite/yaim/siteinfo/services/glite-glexec_wn
:
Variable Name |
Description |
Value type |
Version |
GLEXEC_USER_HOME |
Alternative home directory for glexec , used only when CONFIG_USERS==yes . |
path |
N/A |
GLEXEC_WN_INPUT_LOCK |
Method used for input proxy file locking; allowed values are flock, fcntl, disabled. Flock doesn't work on NFS, while fcntl may cause problems on older kernels. |
string |
N/A |
GLEXEC_WN_TARGET_LOCK |
Method used for target proxy file locking; see GLEXEC_WN_INPUT_LOCK . |
string |
N/A |
- Default service specific variables: they can be found in
/opt/glite/yaim/defaults/glite-glexec_wn.pre or post
:
Variable Name |
Description |
Value type |
Default Value |
Version |
CONFIG_GRIDMAPDIR |
it disables the creation of the gridmap directory only when GLEXEC_WN_SCAS_ENABLED = yes . |
string |
no |
N/A |
GLEXEC_LOCATION |
installation root for the glexec software; set this if you have an alternate build and install location. |
path |
${GLITE_LOCATION} |
N/A |
GLEXEC_WN_CONFIG |
full path of the glexec.conf file; this file is written by YAIM. Make this the hardcoded value in your version of glexec |
path |
/opt/glite/etc/glexec.conf |
N/A |
GLEXEC_WN_LCASLCMAPS_LOG |
lcas/lcmaps log file |
path |
${GLEXEC_WN_LOG_DIR}/lcas_lcmaps.log |
4.0.5-1 |
GLEXEC_WN_LCAS_DEBUG_LEVEL |
lcas debug level |
number |
0 |
4.0.5-1 |
GLEXEC_WN_LCAS_DIR |
lcas configuration directory |
path |
${INSTALL_ROOT}/glite/etc/lcas |
4.0.5-1 |
GLEXEC_WN_LCAS_CONFIG |
lcas configuration file |
path |
${GLEXEC_WN_LCAS_DIR}/lcas-glexec.db |
4.0.5-1 |
GLEXEC_WN_LCAS_LOG_LEVEL |
lcas log level |
number |
1 |
4.0.5-1 |
GLEXEC_WN_LCMAPS_DEBUG_LEVEL |
lcmaps debug level |
number |
0 |
4.0.5-1 |
GLEXEC_WN_LCMAPS_DIR |
lcmaps configuration directory path |
path |
${INSTALL_ROOT}/glite/etc/lcmaps |
4.0.5-1 |
GLEXEC_WN_LCMAPS_CONFIG |
lcmaps configuration file |
path |
${GLEXEC_WN_LCMAPS_DIR}/lcmaps-glexec.db |
4.0.5-1 |
GLEXEC_WN_LCMAPS_LOG_LEVEL |
lcmaps log level |
number |
1 |
4.0.5-1 |
GLEXEC_WN_LOG_DIR |
Directory of the lcas and lcmaps log file. |
path |
/var/log/glexec |
4.0.5-1 |
GLEXEC_WN_LOG_FILE |
glexec log file. Define this variable if you have defined GLEXEC_WN_LOG_DESTINATION=file |
path |
${GLEXEC_WN_LOG_DIR}/glexec.log |
4.0.5-1 |
GLEXEC_WN_LOG_LEVEL |
glexec log level |
number |
0 |
N/A |
GLEXEC_WN_LOG_DESTINATION |
Optional variable to tell glexec where to send the glexec logging information. There are two values: 'syslog' and 'file'. The default is 'syslog'. The value 'syslog' puts all messages in the syslog and 'file' puts the messages in a file. Define this variable if you want to specify a file. For value 'file' define GLEXEC_WN_LOG_FILE as well. |
string |
syslog |
4.0.5-1 |
GLEXEC_WN_PEPC_RESOURCEID |
The resource id passed by the PEP client module to ARGUS. DO NOT CHANGE THIS PARAMETER. |
string |
http://authz-interop.org/xacml/resource/resource-type/wn |
N/A |
GLEXEC_WN_PEPC_ACTIONID |
The action id passed by the PEP client module to ARGUS. DO NOT CHANGE THIS PARAMETER. |
string |
http://glite.org/xacml/action/execute |
N/A |
PILOT_JOB_FLAG |
Flag used in users.conf and groups.conf to define the special pilot job accounts. |
string |
pilot |
4.0.5-1 |
GLEXEC_EXTRA_WHITELIST |
Comma-separated list of account names or pool account representations (e.g. ".ops") to add to the glexec user white list. |
string |
(empty) |
N/A |
GLEXEC_WN_USE_LCAS |
Flag to select the use of LCAS for authentication. |
yes/no |
no if SCAS or ARGUS is used, yes otherwise |
2.0.4 |
Notes on using SCAS and ARGUS
Although atypical, it is possible to configure both the SCAS and
ARGUS modules as back-ends for LCMAPS. The resulting configuration
will first do the callout to ARGUS, then to SCAS. This may be useful,
e.g. if you want ARGUS to perform global banning and SCAS to do the account mapping.
In the following
examples we show entries for
users.conf
and
groups.conf
needed by the GLEXEC_wn configuration. We use
PILOT_JOB_FLAG=pilot
, but you can choose a different identifier. We've chosen the
dteam
VO but you should change it to the VOs you support.
Example of
users.conf
file where user accounts for pilot jobs are defined:
55401:pildtm01:1473,2688:dteampil,dteam:dteam:pilot
55402:pildtm02:1473,2688:dteampil,dteam:dteam:pilot
55403:pildtm03:1473,2688:dteampil,dteam:dteam:pilot
...
Example of
groups.conf
file where FQANs for pilot jobs are defined:
"/dteam/ROLE=pilot":::pilot:
Please, take into account that:
- User accounts for pilot jobs should be pool accounts since static accounts are not supported.
- The flag you define in
PILOT_JOB_FLAG
should be the same as the flag you specify in users.conf
and groups.conf
.
- For more information in
users.conf
please check the YAIM guide - users.conf section and for groups.conf
check YAIM guide - groups.conf section.
- Bear in mind that you need to contact your VO to know which FQAN is supported for pilot jobs. If you define role
pilot
in your configuration but this is not defined in the corresponding VO, it will be useless. This information should be part of the VO ID Card, otherwise please contact the VO.
HYDRA
For more information on the HYDRA service, please check
this link.
- Mandatory general variables
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/siteinfo/services/glite-hydra
:
Variable Name |
Description |
Value type |
Version |
HYDRA_INSTANCES |
Space separated list of Hydra instance IDs for instances to be configured on this node |
String |
1.0.0-3 |
HYDRA_PEERS |
Space separated list of Hydra instances IDs for instances on other nodes |
String |
1.0.0-3 |
HYDRA_DBNAME_<local-instance-ID> |
DB name for local instance |
String |
1.0.0-3 |
HYDRA_DBUSER_<local-instance-ID |
DB user name for local instance |
String |
1.0.0-3 |
HYDRA_DBPASSWORD_<local-instance-ID> |
DB password for local instance |
String |
1.0.0-3 |
HYDRA_ADMIN_<local-instance-ID> |
Subject of Hydra admin who has full rights to the instance |
String |
1.0.0-3 |
HYDRA_CREATE_<instance-ID> |
FQAN(VO) the service belongs to (who may use the service) |
String |
1.0.0-3 |
HYDRA_HOST_<remote-instance-ID> |
Hostname of remote hydra peer |
String |
1.0.0-3 |
HYDRA_ID_<remote-instance-ID> |
Local host ID of remote hydra peer (usually 1) |
String |
1.0.0-3 |
Some configuration guidelines:
- Usually at least three different institutions in the same VO are needed for a production hydra installation
- For production use you should always configure only one local instance per server (e.g.
HYDRA_INSTANCES=1
) and configure the following variables for it
-
HYDRA_DBNAME
, HYDRA_DBUSER
, HYDRA_DBPASSWORD
, HYDRA_ADMIN
(can be e.g. admin user cert, or hostcert DN), HYDRA_CREATE
- Then you should configure (usually at least two) additional servers (e.g.
HYDRA_PEERS="2 3"
), with the following variables
-
HYDRA_CREATE
, HYDRA_HOST
and HYDRA_ID
- You can configure three different Hydra instances on one server, but this should be solely for testing purposes! In this case you configure three local instances (e.g.
HYDRA_INSTANCES="1 2 3"
)
- Configure these three instances like local instances
- Do not configure any
HYDRA_PEERS
- More configuration examples can be found in the documentation linked above
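Following the guidelines above, a sketch of a production head-node fragment with one local instance and two remote peers might look like this (all hostnames, DB names, passwords and DNs are invented placeholders):

```shell
# Hypothetical glite-hydra fragment: one local instance, two peers at other institutions
HYDRA_INSTANCES="1"                       # exactly one local instance in production
HYDRA_DBNAME_1=hydra_db_1
HYDRA_DBUSER_1=hydra
HYDRA_DBPASSWORD_1=changeme
HYDRA_ADMIN_1="/DC=org/DC=example/CN=hydra-admin"
HYDRA_CREATE_1=/dteam                     # FQAN/VO allowed to use the service
HYDRA_PEERS="2 3"
HYDRA_HOST_2=hydra2.example.org
HYDRA_ID_2=1                              # local host ID of the remote peer, usually 1
HYDRA_CREATE_2=/dteam
HYDRA_HOST_3=hydra3.example.org
HYDRA_ID_3=1
HYDRA_CREATE_3=/dteam
```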
LB
- Mandatory general variables
- Default general variables
- Default service specific variables: they can be found in
/opt/glite/yaim/defaults/glite-lb.pre
:
Variable Name |
Description |
Value type |
Default Value |
Version |
GLITE_WMS_LCGMON_FILE |
log file for job monitoring |
path |
/var/glite/logging/status.log |
4.0.2-1 |
GLITE_LB_SUPER_USERS |
additional super-users or WMS nodes |
DN list, comma-separated |
empty |
4.2.2-2 |
GLITE_LB_WMS_DN |
Approved WMS nodes |
DN list, comma-separated |
empty |
4.3.9-1 |
Note: Use
GLITE_LB_SUPER_USERS
instead of
GLITE_LB_WMS_DN
in older versions.
lcg CE
- Mandatory general variables
-
BATCH_SERVER
-
BDII_HOST
-
CE_BATCH_SYS
-
CE_CAPABILITY
(mandatory for yaim-lcg-ce >= 4.0.5-4)
-
CE_OTHERDESCR
(mandatory for yaim-lcg-ce >= 4.0.5-4)
-
SE_MOUNT_INFO_LIST
(mandatory for yaim-lcg-ce >= 4.0.5-4)
-
GROUPS_CONF
-
<queue-name>_GROUP_ENABLE
-
JOB_MANAGER
-
MON_HOST
-
QUEUES
-
SE_LIST
-
USERS_CONF
-
VOS
-
VO_<vo-name>_VOMS_SERVERS
-
VO_<vo-name>_SW_DIR
-
VO_<vo-name>_VOMS_CA_DN
-
VO_<vo-name>_VOMSES
- Also required for lcg-CE in non cluster mode (i.e. all lcg-CE <= 3.1.40)
-
CE_RUNTIMEENV
-
CE_SMPSIZE
-
CE_OS_ARCH
-
CE_SF00
-
CE_SI00
-
CE_MINPHYSMEM
-
CE_MINVIRTMEM
-
CE_INBOUNDIP
-
CE_OUTBOUNDIP
-
CE_OS
-
CE_OS_RELEASE
-
CE_OS_VERSION
-
CE_CPU_SPEED
-
CE_CPU_MODEL
-
CE_CPU_VENDOR
-
CE_PHYSCPU
-
CE_LOGCPU
- cluster mode with lcg-CE>=3.1.46 is selected by defining
- New mandatory variables exist for the lcg-CE >= 3.1.46 when in cluster mode, although many of the
CE_
yaim variables above are no longer needed (their content is instead set via new variables when configuring the glite-CLUSTER node). Variables required in cluster mode are described in the following paragraphs; lists of the available variables can also be found in the example /opt/glite/yaim/examples/siteinfo/services/lcg-ce
When the lcg-CE is configured in cluster mode it will stop publishing information about clusters and subclusters. That information should be published by the glite-CLUSTER node type instead. The glite-CLUSTER may be installed on the same machine as the lcg-CE or on a different host. A new set of yaim variables has been defined for configuring the information which is still required by the lcg-CE in cluster mode. Follow the instructions below:
The new variable names follow this syntax:
- In general, in variables based on hostnames, queues or VOViews, the characters '.' and '-' should be transformed into '_'
- <host-name>: identifier that corresponds to the CE hostname in lower case. Example: ctb-generic-1.cern.ch -> ctb_generic_1_cern_ch
- <queue-name>: identifier that corresponds to the queue in upper case. Example: dteam -> DTEAM
- <voview-name>: identifier that corresponds to the VOView id in upper case. '/' and '=' should also be transformed into '_'. Example: /dteam/Role=admin -> DTEAM_ROLE_ADMIN
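As a sketch, the identifier transformations above can be reproduced with standard shell tools; the hostname, queue and VOView values are taken from the examples:

```shell
# Derive the <host-name>, <queue-name> and <voview-name> identifiers
host="ctb-generic-1.cern.ch"
queue="dteam"
voview="/dteam/Role=admin"

host_id=$(echo "$host" | tr '.-' '__')                 # '.' and '-' become '_'
queue_id=$(echo "$queue" | tr '[:lower:]' '[:upper:]') # queue goes to upper case
voview_id=$(echo "$voview" | tr '[:lower:]' '[:upper:]' | tr '/.=-' '____' | sed 's/^_//')

echo "$host_id"    # ctb_generic_1_cern_ch
echo "$queue_id"   # DTEAM
echo "$voview_id"  # DTEAM_ROLE_ADMIN
```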
Variable Name |
Description |
Value type |
Version |
CE_HOST_<host-name>_CLUSTER_UniqueID |
UniqueID of the cluster the CE belongs to |
string |
glite-yaim-lcg-ce 4.0.5-1 |
CE_InfoApplicationDir |
Prefix of the experiment software directory in a site. This variable has been renamed in the new infosys configuration. The old variable name was: VO_SW_DIR . This parameter can be defined per CE, queue, site or voview. See /opt/glite/yaim/examples/siteinfo/services/lcg-ce for examples. |
string |
glite-yaim-lcg-ce 4.0.5-1 |
CE_CAPABILITY |
Is a space separated list, each item will be published as a GlueCECapability attribute. It must include a CPUScalingReferenceSI00 value and may also need to include Share values. It can be defined by CE, queue or site by adding the appropriate prefix to the variable name. See /opt/glite/yaim/examples/siteinfo/services/lcg-ce for an example of a queue specific setting. An example site wide value is also set in site-info.def. This should be edited, or commented out and alternate value(s) set in services/lcg-ce |
string |
glite-yaim-lcg-ce-5.0.3-1 |
The following variables will be distributed in the future in site-info.def since they affect other yaim modules. At this moment we are in a transition phase to migrate to the new variable names.
Variable Name |
Description |
Value type |
Version |
CE_HOST_<host-name>_CE_TYPE |
CE type: 'jobmanager' for lcg CE and 'cream' for cream CE |
string |
glite-yaim-lcg-ce 4.0.5-1 |
CE_HOST_<host-name>_QUEUES |
Space separated list of the queue names configured in the CE. This variable has been renamed in the new infosys configuration. The old variable name was: QUEUES |
string |
glite-yaim-lcg-ce 4.0.5-1 |
CE_HOST_<host-name>_QUEUE_<queue-name>_CE_AccessControlBaseRule |
Space separated list of FQANS and/or VO names which are allowed to access the queues configured in the CE. This variable has been renamed in the new infosys configuration. The old variable name was: _GROUP_ENABLE |
string |
glite-yaim-lcg-ce 4.0.5-1 |
CE_HOST_<host-name>_CE_InfoJobManager |
The name of the job manager used by the gatekeeper. This variable has been renamed in the new infosys configuration. The old variable name was: JOB_MANAGER . Please, define: lcgpbs, lcglsf, lcgsge or lcgcondor |
string |
glite-yaim-lcg-ce 4.0.5-1 |
JOB_MANAGER |
The old variable is still needed since config_jobmanager in yaim core hasn't been modified to use the new variable. To be done. |
string |
OLD variable |
When using yaim-core >= 4.0.13 the OLD variables
JOB_MANAGER,
_GROUP_ENABLE and
QUEUES will be set (or reset) to the values of the new replacement variables listed above. With prior versions both the new and the old style need to be set consistently.
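To illustrate the new naming scheme, the sketch below shows cluster-mode variables for an invented CE ce01.example.org with a single dteam queue; the identifiers follow the transformation rules described above:

```shell
# Hypothetical cluster-mode settings for ce01.example.org
CE_HOST_ce01_example_org_CE_TYPE=jobmanager
CE_HOST_ce01_example_org_QUEUES="dteam"
CE_HOST_ce01_example_org_QUEUE_DTEAM_CE_AccessControlBaseRule="/dteam"
CE_HOST_ce01_example_org_CE_InfoJobManager=lcgpbs
CE_HOST_ce01_example_org_CLUSTER_UniqueID=cluster01.example.org
JOB_MANAGER=lcgpbs   # old variable, still read by config_jobmanager in yaim core
```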
- Optional service specific variables:
/opt/glite/yaim/examples/siteinfo/services/lcg-ce
:
Variable Name |
Description |
Value type |
Version |
GASS_CACHE_MARSHAL_LOG |
Log file for the gass cache |
path |
4.0.4-4 |
GATEKEEPER_DGAS_DIR |
DGAS path |
path |
4.0.4-4 |
GMA_LOG |
Log file for GMA |
path |
4.0.4-4 |
JOB_MANAGER_MARSHAL_LOG |
Log file for marshal job manager |
path |
4.0.4-4 |
- Default service specific variables: they can be found in
/opt/glite/yaim/defaults/lcg-ce.post
:
Variable Name |
Description |
Value type |
Default Value |
Version |
CE_ImplementationVersion |
The version of the implementation. This should probably be in .pre instead of .post |
version |
3.1 |
glite-yaim-lcg-ce 4.0.5-1 |
CE_InfoLRMSType |
Type of the underlying Resource Management System |
string |
${CE_BATCH_SYS} |
glite-yaim-lcg-ce 4.0.5-1 |
STATIC_CREATE |
Path to the script that creates the ldif file |
path |
${INSTALL_ROOT}/glite/sbin/glite-info-static-create |
glite-yaim-lcg-ce 4.0.5-1 |
TEMPLATE_DIR |
Path to the ldif templates directory |
path |
${INSTALL_ROOT}/glite/etc |
glite-yaim-lcg-ce 4.0.5-1 |
CONF_DIR |
Path to the temporary configuration directory |
path |
${INSTALL_ROOT}/glite/var/tmp/gip |
glite-yaim-lcg-ce 4.0.5-1 |
LDIF_DIR |
Path to the ldif directory |
path |
${INSTALL_ROOT}/glite/etc/gip/ldif |
glite-yaim-lcg-ce 4.0.5-1 |
GlueCE_ldif |
Path to the GlueCE ldif file |
path |
${LDIF_DIR}/static-file-CE.ldif |
glite-yaim-lcg-ce 4.0.5-1 |
GlueCESEBind_ldif |
Path to the GlueCESEBind ldif file |
path |
${LDIF_DIR}/static-file-CESEBind.ldif |
glite-yaim-lcg-ce 4.0.5-1 |
LFC
As soon as the fix for bug
40619
is released, LFC specific variables can be found in the yaim lfc module.
LFC mysql
- Mandatory general variables
-
GROUPS_CONF
-
MYSQL_PASSWORD
-
SITE_NAME
-
USERS_CONF
-
VOS
-
VO_<vo-name>_VOMS_SERVERS
-
VO_<vo-name>_VOMS_CA_DN
-
VO_<vo-name>_VOMSES
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/siteinfo/services/glite-lfc_mysql
:
- Default service specific variables: they can be found in
/opt/glite/yaim/defaults/glite-lfc_mysql.pre
:
Variable Name |
Description |
Value type |
Default Value |
Version |
LFC_DB |
The LFC DB name |
String |
cns_db |
3.0.1-0 |
LFC_DB_HOST |
The LFC DB host name |
String |
$LFC_HOST (does not work: BUG:76322 ) |
3.0.1-0 |
LFC_DB_PASSWORD |
The LFC DB user password. |
String |
mypassword |
3.0.1-0 |
LFC_HOST_ALIAS |
If you use a DNS alias in front of your LFC, specify it here. |
FQDN |
"" |
3.0.1-0 |
LFC_CENTRAL |
List of VO names whose catalog is configured as central. |
Space separated VO list |
"" |
3.0.1-0 |
LFC_LOCAL |
List of VO names whose catalog is configured as local. (do _not_ rely on the default behavior, it has bugs) |
Space separated VO list |
"" |
3.0.1-0 |
LFC oracle
- Mandatory general variables
-
GROUPS_CONF
-
SITE_NAME
-
USERS_CONF
-
VOS
-
VO_<vo-name>_VOMS_SERVERS
-
VO_<vo-name>_VOMS_CA_DN
-
VO_<vo-name>_VOMSES
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/siteinfo/services/glite-lfc_oracle
:
Variable Name |
Description |
Value type |
Version |
LFC_HOST |
Name of the LFC node |
FQDN |
3.0.1-0 |
LFC_DB_HOST |
The Oracle connect service name (you need to define the service's connection details in tnsnames.ora, the connection description can not be set directly in LFC_DB_HOST ) |
String |
3.0.1-0 |
- Default general variables
- Default service specific variables: they can be found in
/opt/glite/yaim/defaults/glite-lfc_oracle.pre
:
Variable Name |
Description |
Value type |
Default Value |
Version |
LFC_DB |
The Oracle db user |
String |
LCG_YAIM |
3.0.1-0 |
LFC_DB_HOST |
The Oracle connect service name |
String |
$LFC_HOST (does not work: BUG:76322 ) |
3.0.1-0 |
LFC_DB_PASSWORD |
The Oracle db user password. Cannot contain @ sign |
String |
mypassword |
3.0.1-0 |
LFC_HOST_ALIAS |
If you use a DNS alias in front of your LFC, specify it here. |
FQDN |
"" |
3.0.1-0 |
LFC_CENTRAL |
List of VO names whose catalog is configured as central. |
Space separated VO list |
"" |
3.0.1-0 |
LFC_LOCAL |
List of VO names whose catalog is configured as local. (do _not_ rely on the default behavior, it has bugs) |
Space separated VO list |
"" |
3.0.1-0 |
- Mandatory general variables
-
APEL_DB_PASSWORD
-
BATCH_SERVER
-
CE_BATCH_SYS
-
CE_HOST
-
MON_HOST
-
SITE_NAME
MON
- Mandatory general variables
-
APEL_DB_PASSWORD
-
CE_HOST
-
GRIDICE_SERVER_HOST
-
MON_HOST
-
MYSQL_PASSWORD
-
SITE_NAME
-
SITE_BDII_HOST
If
GIN_BDII
is yes, then
SITE_BDII_HOST
is required (default behaviour). Otherwise,
GRIDICE_SERVER_HOST
is required.
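In other words, the two supported combinations are the following (hostnames are placeholders):

```shell
# Default: GIN reads the site BDII
GIN_BDII=yes
SITE_BDII_HOST="sbdii.example.org"

# Alternative: populate the Glue tables via fmon/GridICE
# GIN_BDII=no
# GRIDICE_SERVER_HOST="gridice.example.org"
```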
- Default general variables
-
REG_HOST
-
JAVA_LOCATION
(will disappear in a future release)
- Default service specific variables: can be found in
/opt/glite/yaim/defaults/glite-mon.pre
:
Variable Name |
Description |
Value type |
Default Value |
Version |
APEL_PUBLISH_USER_DN |
If set to "yes" it will enable UserDN encryption |
String |
no |
4.0.2-2 |
APEL_PUBLISH_LIMIT |
Number of records that APEL will select in one go. The value should be adjusted according to the memory assigned to the Java VM: for 512Mb use around 150000 records, and for 1024Mb around 300000. The default value included in the APEL code is 300000, matching the default memory of 1024Mb. |
number |
300000 |
4.0.2-7 |
GIN_BDII |
If this is set to yes, it will configure GIN to use the site BDII to populate the Glue tables in R-GMA. If set to no, it will use fmon to populate the tables. |
String |
yes |
3.0.1-0 |
MYSQL_HOST |
The name of the host where the mysql server is located |
hostname |
localhost |
4.0.2-6 |
MYSQL_REMOTE_USER |
The name of the user for access to the remote MySQL server |
user name |
root |
4.0.2-6 |
For more information, please check
the YAIM MPI wiki
.
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/siteinfo/services/glite-mpi
:
Variable Name |
Description |
Value type |
Version |
MPI_SHARED_HOME |
Set this to "yes" if you have a shared home area between WNs, to the path of a (writeable) shared area if one is available, or to "no" if there is no shared area. |
String |
0.1 |
MPI_SSH_HOST_BASED_AUTH |
Set this to "yes" if you have password-less ssh enabled between your worker nodes. |
String |
0.1 |
MPI_MPICH_MPIEXEC |
If you are using OSC mpiexec with mpich, set this to the location of the mpiexec program, e.g. "/usr/bin/mpiexec" |
String |
0.1 |
MPI_MPICH2_MPIEXEC |
If you are using OSC mpiexec with mpich2, set this to the location of the mpiexec program, e.g. "/usr/bin/mpiexec" |
String |
0.1 |
MPI_MPI_START |
Location of mpi-start if not installed in standard location /opt/i2g/bin/mpi-start |
String |
0.1 |
MPI_MPICH_ENABLE |
Set to "yes" to enable the MPICH implementation of MPI. |
String |
0.1 |
MPI_MPICH2_ENABLE |
Set to "yes" to enable the MPICH2 implementation of MPI. |
String |
0.1 |
MPI_MPICH_PATH |
Path to the MPI implementation MPICH. |
/opt/mpich-1.2.7p1 |
0.1 |
MPI_MPICH_VERSION |
Version of MPI implementation MPICH. |
1.2.7p1 |
0.1 |
MPI_MPICH2_PATH |
Path to the MPI implementation MPICH2. |
/opt/mpich2-1.0.5p4 |
0.1 |
MPI_MPICH2_VERSION |
Version of MPI implementation MPICH2. |
1.0.5p4 |
0.1 |
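A hypothetical services/glite-mpi fragment enabling only the MPICH2 implementation could look like this (paths and versions are illustrative, not defaults):

```shell
# services/glite-mpi -- illustrative configuration enabling MPICH2 only
MPI_SHARED_HOME="yes"                  # WNs share a home area
MPI_SSH_HOST_BASED_AUTH="yes"          # password-less ssh between WNs
MPI_MPICH2_ENABLE="yes"
MPI_MPICH2_PATH="/opt/mpich2-1.0.5p4"
MPI_MPICH2_VERSION="1.0.5p4"
MPI_MPICH2_MPIEXEC="/usr/bin/mpiexec"  # only if using OSC mpiexec
```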
NAGIOS
PX
- Mandatory general variables
- Mandatory service specific variables:
/opt/glite/yaim/examples/services/glite-px
:
For a better understanding of the configuration variables, please, check the official
Myproxy documentation
or the relevant section of the
myproxy.conf file
.
Variable Name |
Description |
Value type |
Version |
GRID_AUTHORIZED_KEY_RETRIEVERS |
Space separated list of the DNs of the host certificates which are authorised key retrievers (e.g. '/O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch'). Note that each DN must be enclosed in single quotes. |
Hostname DN list |
4.0.3-1 |
GRID_AUTHORIZED_RENEWERS |
Space separated list of the DNs of the host certificates which are authorised renewers (e.g. '/O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch'). Note that each DN must be enclosed in single quotes. |
Hostname DN list |
4.0.3-1 |
GRID_AUTHORIZED_RETRIEVERS |
Space separated list of the DNs of the host certificates which are authorised retrievers (e.g. '/O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch'). Note that each DN must be enclosed in single quotes. |
Hostname DN list |
4.0.3-1 |
GRID_DEFAULT_RENEWERS |
Space separated list of the DNs of the host certificates which are default renewers (e.g. '/O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch'). Note that each DN must be enclosed in single quotes. |
Hostname DN list |
4.0.3-1 |
GRID_DEFAULT_RETRIEVERS |
Space separated list of the DNs of the host certificates which are default retrievers (e.g. '/O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch'). Note that each DN must be enclosed in single quotes. |
Hostname DN list |
4.0.3-1 |
GRID_DEFAULT_KEY_RETRIEVERS |
Space separated list of the DNs of the host certificates which are default key retrievers (e.g. '/O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch'). Note that each DN must be enclosed in single quotes. |
Hostname DN list |
4.0.3-1 |
GRID_DEFAULT_TRUSTED_RETRIEVERS |
Space separated list of the DNs of the host certificates which are default trusted retrievers (e.g. '/O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch'). Note that each DN must be enclosed in single quotes. |
Hostname DN list |
4.0.3-1 |
GRID_TRUSTED_BROKERS |
Space separated list of the DNs of the host certificates which are trusted by the Proxy node: Resource brokers, WMS and FTS servers (e.g. '/O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch'). Note that each DN must be enclosed in single quotes. |
Hostname DN list |
deprecated > 4.0.3-1 |
GRID_TRUSTED_RETRIEVERS |
Space separated list of the DNs of the host certificates which are trusted retrievers (e.g. '/O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch'). Note that each DN must be enclosed in single quotes. |
Hostname DN list |
4.0.3-1 |
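The quoting requirement above matters because DNs contain spaces: each DN must be wrapped in single quotes inside the double-quoted list. A sketch with hypothetical host DNs:

```shell
# services/glite-px -- note the single quotes around each DN
GRID_TRUSTED_RETRIEVERS="'/O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch'"
GRID_AUTHORIZED_RENEWERS="'/DC=org/DC=example/CN=host/wms.example.org' '/DC=org/DC=example/CN=host/fts.example.org'"
```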
RB
- Mandatory general variables
-
BATCH_LOG_DIR
-
BDII_HOST
-
GRIDICE_SERVER_HOST
-
GROUPS_CONF
-
MYSQL_PASSWORD
-
RB_HOST
-
SITE_NAME
-
SITE_EMAIL
-
USERS_CONF
-
VOS
-
VO_<vo-name>_VOMSES
SCAS
- Mandatory general variables
-
GROUPS_CONF
-
USERS_CONF
-
VO_<vo-name>_VOMSES
-
VO_<vo-name>_VOMS_CA_DN
(Mandatory for glite-yaim-core > 4.0.5-7)
-
VO_<vo-name>_VOMS_SERVERS
-
VOS
- Mandatory service specific variables: None.
- Default general variables
-
X509_HOST_CERT
-
X509_HOST_KEY
-
X509_CERT_DIR
- Default service specific variables: they can be found in
/opt/glite/yaim/defaults/glite-scas.pre or post
:
Variable Name |
Description |
Value type |
Default Value |
Version |
SCAS_CONFIG |
SCAS configuration file |
path |
${INSTALL_ROOT}/glite/etc/scas.conf |
1.0.0-1 |
SCAS_DEBUG_LEVEL |
debugging level |
number |
0 |
1.0.0-1 |
SCAS_GROUP |
scas group |
string |
scas |
1.0.0-1 |
SCAS_HOST_CERT |
host certificate path |
path |
/etc/grid-security/scascert.pem |
1.0.0-1 |
SCAS_HOST_KEY |
host key path |
path |
/etc/grid-security/scaskey.pem |
1.0.0-1 |
SCAS_LCMAPS_CONFIG |
LCMAPS configuration file |
path |
${SCAS_LCMAPS_DIR}/lcmaps-scas.db |
1.0.0-1 |
SCAS_LCMAPS_DEBUG_LEVEL |
debugging level |
number |
0 |
1.0.0-1 |
SCAS_LCMAPS_DIR |
LCMAPS configuration file directory |
path |
${INSTALL_ROOT}/glite/etc/lcmaps |
1.0.0-1 |
SCAS_LCMAPS_LOG_LEVEL |
logging level |
number |
1 |
1.0.0-1 |
SCAS_LCAS_CONFIG |
LCAS configuration file |
path |
${SCAS_LCAS_DIR}/lcas-scas.db |
1.0.0-1 |
SCAS_LCAS_DEBUG_LEVEL |
debugging level |
number |
0 |
1.0.0-1 |
SCAS_LCAS_DIR |
LCAS configuration file directory |
path |
${INSTALL_ROOT}/glite/etc/lcas |
1.0.0-1 |
SCAS_LCAS_LOG_LEVEL |
logging level |
number |
1 |
1.0.0-1 |
SCAS_LOG_DIR |
log directory |
path |
/var/log/glite |
1.0.0-1 |
SCAS_LOG_FILE |
logging file |
path |
${SCAS_LOG_DIR}/scas.log |
1.0.0-1 |
SCAS_LOG_LEVEL |
logging level |
number |
1 |
1.0.0-1 |
SCAS_PORT |
SCAS server port |
number |
8443 |
1.0.0-1 |
SCAS_USER |
scas user |
string |
scas |
1.0.0-1 |
SCAS_USER_HOME |
SCAS user home directory (see note) |
string |
(none) |
1.1.0 |
- Note: the
SCAS_USER_HOME
variable is optional. It will be used only if CONFIG_USERS
= yes. The directory will be created if it does not exist. If not set, the system default will be used.
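All of these defaults can be overridden in site-info.def before running YAIM. For instance (the home directory and raised log level are illustrative choices, not defaults):

```shell
# Override selected SCAS defaults (illustrative)
SCAS_PORT=8443
SCAS_LOG_LEVEL=2             # raise from the default of 1 while debugging
SCAS_USER_HOME=/home/scas    # only honoured when CONFIG_USERS=yes
```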
SGE / GE utils
- Mandatory general variables
-
APEL_DB_PASSWORD
-
BATCH_LOG_DIR
= <path to the SGE accounting file>
-
BATCH_SERVER
= <SGE batch server hostname>
-
CE_BATCH_SYS
= "sge"
-
CE_HOST
-
JOB_MANAGER
= "sge"
-
MON_HOST
-
MY_DOMAIN
-
QUEUES
-
SITE_NAME
- Default service specific variables: can be found in
/opt/glite/yaim/defaults/glite-sge-utils.pre
:
Variable Name |
Description |
Value type |
Default Value |
Version |
SGE_ROOT |
SGE installation path |
path |
/usr/local/sge/pro |
glite-yaim-sge-utils-4.1.1-6 |
SGE_CELL |
SGE cell name |
string |
default |
glite-yaim-sge-utils-4.1.1-6 |
SGE_QMASTER |
Port used to contact SGE QMASTER |
port |
536 |
glite-yaim-sge-utils-4.1.1-6 |
SGE_EXECD |
Port used to contact SGE EXECD |
port |
537 |
glite-yaim-sge-utils-4.1.1-6 |
SGE_SPOOL_METH |
Spooling method |
string |
classic |
glite-yaim-sge-utils-4.1.1-6 |
SGE_CLUSTER_NAME |
Cluster Name |
string |
p536 |
glite-yaim-sge-utils-4.1.1-6 |
SGE_SHARED_INSTALL |
Whether the SGE installation is shared |
string |
no |
glite-yaim-sge-utils-4.1.1-6 |
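Putting the mandatory variables together, a site-info.def fragment for an SGE CE might look as follows (hostname and paths are placeholders; the accounting path assumes the standard $SGE_ROOT/$SGE_CELL/common layout):

```shell
# site-info.def fragment for an SGE CE (illustrative values)
BATCH_SERVER="sge-master.example.org"
CE_BATCH_SYS="sge"
JOB_MANAGER="sge"
SGE_ROOT="/usr/local/sge/pro"
SGE_CELL="default"
BATCH_LOG_DIR="${SGE_ROOT}/${SGE_CELL}/common/accounting"
```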
SE classic
- Mandatory general variables
-
BDII_HOST
-
USERS_CONF
-
GROUPS_CONF
-
VOS
-
SITE_NAME
-
SITE_EMAIL
-
VO_<vo-name>_STORAGE_DIR
-
SE_GRIDFTP_LOGFILE
-
CLASSIC_STORAGE_DIR
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/services/glite-seclassic
:
TAR UI
Please check this link:
TAR UI Installation and Configuration.
TAR WN
Please check this link:
TAR WN Installation and Configuration.
TORQUE
A note on the use of munge
Torque for EPEL/Fedora is built to use munge as of version 2.5.7. See the
release notes
. This means that in order to use these versions of torque, munged
must be started on the server and submit hosts (i.e., CEs) with a shared secret key in /etc/munge/munge.key. It is up to the administrator to take care of the distribution of this key, but the YAIM variable MUNGE_KEY_FILE can be used to install the key from a location that can be read by YAIM at configuration time. Leaving this variable empty means that the administrator is responsible for the installation of this key before YAIM is run, or the system will be left in a non-working state. Munge is required on all node types: CEs (submit hosts), the torque head node and the worker nodes.
TORQUE server
- Mandatory general variables
-
BATCH_SERVER
-
CE_HOST
-
CE_SMPSIZE
-
USERS_CONF
-
QUEUES
-
VOS
-
WN_LIST
- Default service specific variables: can be found in
/opt/glite/yaim/defaults/glite-torque-server.pre
:
Variable Name |
Description |
Value type |
Default Value |
Version |
CONFIG_TORQUE_NODES |
Set it to 'no' if you want to disable the /var/torque/server_priv/nodes configuration in YAIM |
yes or no |
yes |
glite-yaim-torque-server-4.0.4-1 |
MUNGE_KEY_FILE |
Path of a file containing the munge key. Munge is required since Torque version 2.5.7. This file will be copied to /etc/munge/munge.key . |
path |
(empty) |
glite-torque-server-4.1.0-1 |
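Putting the munge note above into practice, a site can stage the shared key in a location readable by YAIM and let configuration install it (the staging path is a hypothetical example):

```shell
# site-info.def fragment (illustrative)
CONFIG_TORQUE_NODES=yes
MUNGE_KEY_FILE=/root/secrets/munge.key  # copied to /etc/munge/munge.key by YAIM
```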
TORQUE client
- Mandatory general variables
-
BATCH_SERVER
-
CE_HOST
-
CE_SMPSIZE
- Default service specific variables: can be found in
/opt/glite/yaim/defaults/glite-torque-client.pre
:
TORQUE utils
- Mandatory general variables
-
APEL_DB_PASSWORD
-
BATCH_SERVER
-
CE_HOST
-
MON_HOST
-
QUEUES
-
SITE_NAME
-
WN_LIST
- Default service specific variables: can be found in
/opt/glite/yaim/defaults/glite-torque-utils.pre
:
Variable Name |
Description |
Value type |
Default Value |
Version |
APEL_MYSQL_HOST |
Hostname of the server where the MySQL DB for APEL is installed. Bear in mind that if you use the default value, MON_HOST, but MON_HOST is not defined in site-info.def, YAIM will complain that APEL_MYSQL_HOST is not defined. |
hostname |
MON_HOST |
glite-yaim-torque-utils-4.0.4-1 |
CONFIG_MAUI |
Set it to 'no' if you want to disable the maui configuration in YAIM |
yes or no |
yes |
glite-yaim-torque-utils-4.0.4-1 |
MUNGE_KEY_FILE |
Path of a file containing the munge key of the Torque server. Munge is required since Torque version 2.5.7. This file will be copied to /etc/munge/munge.key . |
path |
(empty) |
glite-torque-utils-4.1.0-1 |
TORQUE_VAR_DIR |
Path to relocated Torque var hierarchy |
path |
/var/torque |
|
UI
- Mandatory general variables
-
BDII_HOST
-
LB_HOST
(mandatory for glite-yaim-clients < 4.0.4-4)
-
MON_HOST
(not needed anymore in gLite 3.2 UI)
-
PX_HOST
-
WMS_HOST
-
VOS
-
VO_<vo-name>_VOMSES
-
VO_<vo-name>_VOMS_CA_DN
(Mandatory for glite-yaim-core > 4.0.5-7)
- Default general variables
- Default service specific variables: they can be found in /opt/glite/yaim/defaults/glite-ui.pre and post:
Variable Name |
Description |
Value type |
Default Value |
Version |
GLITE_SD_PLUGIN |
Service discovery settings to determine the FTS endpoint. Possible values are: 1) file : look for the FTS endpoint in a static file specified in GLITE_SD_SERVICES_XML. 2) bdii : look for the FTS endpoint dynamically from the BDII. Both options can be specified. The first one is tried first. |
string |
file,bdii |
4.0.8-1 |
GLITE_SD_SERVICES_XML |
Location of the FTS services.xml cache file. This has to be used in combination with GLITE_SD_PLUGIN="file,bdii" |
path |
"${INSTALL_ROOT}/glite/etc/services.xml" |
4.0.8-1 |
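The two service discovery variables above work together; restating the default and one alternative as a config fragment:

```shell
# Default: try the static services.xml file first, then fall back to the BDII
GLITE_SD_PLUGIN="file,bdii"
GLITE_SD_SERVICES_XML="${INSTALL_ROOT}/glite/etc/services.xml"

# Dynamic-only alternative:
# GLITE_SD_PLUGIN="bdii"
```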
VOBOX
- Mandatory general variables
-
BDII_HOST
-
GROUPS_CONF
-
LB_HOST
(mandatory for glite-yaim-clients < 4.0.4-4)
-
MON_HOST
(not needed anymore in gLite 3.2 VOBOX)
-
PX_HOST
-
RB_HOST
(only for releases < 3.2.9)
-
SITE_NAME
-
SE_LIST
-
USERS_CONF
-
VOS
-
VO_<vo-name>_VOMSES
-
VO_<vo-name>_VOMS_CA_DN
(Mandatory for glite-yaim-core > 4.0.5-7)
-
VO_<vo-name>_SW_DIR
-
WMS_HOST
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/siteinfo/services/glite-vobox
:
Variable Name |
Description |
Value type |
Version |
VOBOX_HOST |
The hostname of the VOBOX |
hostname |
3.0.1-0 |
GSSKLOG_SERVER |
Only when GSSKLOG is yes - The name of the AFS authentication server host. |
hostname |
3.0.1-0 |
- Default general variables
- Default service specific variables: they can be found in
/opt/glite/yaim/defaults/glite-vobox.pre
:
Variable Name |
Description |
Value type |
Default Value |
Version |
VOBOX_PORT |
The port where the VOBOX gsisshd listens to |
String |
1975 |
3.0.1-0 |
GSSKLOG |
yes or no. It indicates whether the site provides an AFS authentication server which maps gsi credentials into Kerberos tokens. |
String |
no |
3.0.1-0 |
If
GSSKLOG
is set to yes, remember to define the
GSSKLOG_SERVER
.
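For a VOBOX at a site providing AFS token mapping, the relevant fragment could read (hostnames are placeholders):

```shell
# services/glite-vobox fragment with AFS token support (illustrative)
VOBOX_HOST="vobox.example.org"
VOBOX_PORT=1975
GSSKLOG=yes
GSSKLOG_SERVER="afs-auth.example.org"  # required whenever GSSKLOG=yes
```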
VOMS
IMPORTANT:
For EMI installations, please refer to the latest
VOMS YAIM guide
.
Database-backend independent YAIM variables
- Mandatory general variables
Variable Name |
Description |
Value type |
Version |
VOMS_HOST |
VOMS server hostname |
hostname |
1.0.0-3 |
VOMS_DB_HOST |
Hostname of the database server. Put 'localhost' if you run the database on the same machine. This parameter can be specified per VO in the following way: VO_<vo-name>_VOMS_DB_HOST |
hostname |
1.0.0-3 |
VO_<vo-name>_VOMS_PORT |
The port on the VOMS server listening for request for each VO. This is used in the vomses configuration file. By convention, port numbers are allocated starting with 15000 |
port number |
1.0.0-3 |
VOMS_ADMIN_SMTP_HOST |
Host to which voms-admin-service-generated emails should be submitted. Use 'localhost' if you have a fully configured SMTP server running on this host. Otherwise specify the hostname of a working SMTP submission service. This parameter can be specified per VO in the following way: VO_<vo-name>_VOMS_ADMIN_SMTP_HOST |
hostname |
1.0.0-3 |
VOMS_ADMIN_MAIL |
E-mail address that is used to send notification mails from the VOMS-admin. This parameter can be specified per VO in the following way: VO_<vo-name>_VOMS_ADMIN_MAIL |
mail |
1.0.0-3 |
The following variables are optional. Leave them commented out unless you want to define them; otherwise VOMS will apply a default value internally:
Variable Name |
Description |
Value type |
Version |
VOMS_ADMIN_CERT |
The path of the certificate file (in pem format) of an initial VO administrator. The VO will be set up so that this user has full VO administration privileges. This parameter can be specified per VO in the following way: VO_<vo-name>_VOMS_ADMIN_CERT |
path |
1.0.0-3 |
VOMS_ADMIN_TOMCAT_GROUP |
The UNIX group that Tomcat is run under |
group name |
1.0.0-3 |
VOMS_ADMIN_VOMS_GROUP |
The UNIX group that the VOMS core service is run under |
group name |
1.0.0-3 |
- Default service specific variables: can be found in
/opt/glite/yaim/defaults/glite-voms.[pre|post]
:
Variable Name |
Description |
Value type |
Default Value |
Version |
VOMS_DB_TYPE |
DB type |
oracle/mysql |
mysql |
1.0.0-3 |
VOMS_DB_DEPLOY |
If set to 'true' it will attempt the creation and deployment of the database schema and initial contents (unless an existing database is found). |
true/false |
true |
1.0.0-3 |
VOMS_ADMIN_INSTALL |
Set this variable to 'false' if you don't want to configure voms-admin. |
true/false |
true |
1.0.0-3 |
VOMS_ADMIN_VERBOSE |
VOMSAdmin verbosity |
true/false |
true |
1.0.0-3 |
VOMS_ADMIN_WEB_REGISTRATION_DISABLE |
Set this variable to true if you want to disable the user registration via the voms-admin web interface. This parameter can be specified per VO in the following way: VO_<vo-name>_VOMS_ADMIN_WEB_REGISTRATION_DISABLE |
true/false |
false |
1.0.0-3 |
VOMS_CORE_LOGROTATE_LOGNUMBER |
This parameter can be specified per VO in the following way: VO_<vo-name>_VOMS_CORE_LOGROTATE_LOGNUMBER |
number of rotated log files |
90 |
1.0.0-3 |
VOMS_CORE_LOGROTATE_PERIOD |
This parameter can be specified per VO in the following way: VO_<vo-name>_VOMS_CORE_LOGROTATE_PERIOD |
daily, weekly, monthly |
daily |
1.0.0-3 |
VOMS_CORE_TIMEOUT |
The maximum length of validity of the ACs that VOMS will grant, in seconds. The default value is 24 hours. This parameter can be specified per VO in the following way: VO_<vo-name>_VOMS_CORE_TIMEOUT |
seconds |
86400 |
1.0.0-3 |
VOMS_SHORT_FQANS |
FQANs syntax that will appear in the VO extension information of a voms proxy. |
true/false |
false |
1.0.0-4 |
VOMS_PYTHONPATH |
ZSI module path |
path |
/opt/ZSI/lib/python2.3/site-packages (not used in EMI deployments) |
1.0.0-4 |
CATALINA_HOME |
Tomcat Catalina home directory |
path |
/var/lib/tomcat5 |
1.0.0-3 |
TOMCAT_USER |
Tomcat user name |
user name |
tomcat |
1.0.0-3 |
GLITE_LOCATION_VAR |
- |
path |
/var/glite |
1.0.0-4 |
GLITE_LOCATION_LOG |
- |
path |
/var/log/glite |
1.0.0-4 |
GLITE_LOCATION_TMP |
- |
path |
/tmp/glite |
1.0.0-4 |
VOMS mysql specific variables
- Mandatory service specific variables: can be found in
/opt/glite/yaim/examples/services/glite-voms_mysql
:
Variable Name |
Description |
Value type |
Version |
VO_<vo-name>_VOMS_DB_NAME |
The MySQL database name to be used to store VOMS information. |
DB name |
1.0.0-3 |
VO_<vo-name>_VOMS_DB_USER |
Name of the DB user |
DB user name |
1.0.0-3 |
VO_<vo-name>_VOMS_DB_USER_PASS |
Password of the DB user account |
password |
1.0.0-3 |
- Default service specific variables: can be found in
/opt/glite/yaim/defaults/glite-voms.[pre|post]
:
Variable Name |
Description |
Value type |
Default Value |
Version |
VOMS_MYSQL_ADMIN |
MySQL privileged user account. |
user |
root |
1.0.0-3 |
VOMS_MYSQL_CONFIG_FILE |
MySQL config file |
path |
/etc/my.cnf |
1.0.0-3 |
VOMS_MYSQL_LIBRARY |
MySQL library path |
path |
${GLITE_LOCATION}/lib/libvomsmysql.so for gLite installations, /usr/lib64/libvomsmysql.so for EMI installations |
1.0.0-3 |
VOMS_MYSQL_MAX_CONNECTIONS |
Maximum number of connections to MySQL |
number |
500 |
1.0.0-3 |
VOMS_MYSQL_PORT |
MySQL port |
port |
3306 |
1.0.0-3 |
VOMS oracle specific variables
- Mandatory service specific variables: can be found in
/opt/glite/yaim/examples/services/glite-voms_oracle
:
Variable Name |
Description |
Value type |
Version |
VO_<vo-name>_VOMS_DB_USER |
Name of the DB user |
DB user name |
1.0.0-3 |
VO_<vo-name>_VOMS_DB_USER_PASS |
Password of the DB user account |
password |
1.0.0-3 |
ORACLE_CONNECTION_STRING |
Specifies the oracle connection string. See the Examples section below for an example. This parameter can be specified per VO in the following way: VO_<vo-name>_ORACLE_CONNECTION_STRING |
oracle connection string |
1.0.0-3 |
- Default service specific variables: can be found in
/opt/glite/yaim/defaults/glite-voms.pre and /opt/glite/yaim/defaults/glite-voms.post
:
Variable Name |
Description |
Value type |
Default Value |
Version |
ORACLE_CLIENT |
Location of the Oracle Instantclient installation |
path |
Depends on the version of Oracle instantclient installed, e.g. /usr/lib/oracle/10.2.0.4/client |
1.0.0-3 |
VOMS_ADMIN_ORACLE_MAX_CONNECTIONS |
Maximum number of connections to be opened per VO |
number |
20 |
1.0.0-3 |
VOMS_ADMIN_ORACLE_MIN_CONNECTIONS |
Minimum number of connections to be opened per VO |
number |
1 |
1.0.0-3 |
VOMS_ADMIN_ORACLE_PORT |
Port number of the database server for oracle. This parameter can be specified per VO in the following way: VO_<vo-name>_ORACLE_PORT |
port number |
10121 |
1.0.0-3 |
VOMS_ADMIN_ORACLE_START_CONNECTIONS |
Startup number of connections to be opened per VO |
number |
10 |
1.0.0-3 |
VOMS_ORACLE_LIBRARY |
Path to the oracle libraries |
path |
${GLITE_LOCATION}/lib/libvomsoracle.so in gLite installations, /usr/lib64/libvomsoracle.so in EMI installations |
1.0.0-3 |
VOMS configuration examples
Below is a siteinfo and service file for a minimal VOMS mysql node configuration:
site-info.def:
MYSQL_PASSWORD="pwd"
SITE_NAME="voms-certification.cnaf.infn.it"
VOS="cert.mysql"
services/glite-voms:
VOMS_HOST=cert-voms-01.cnaf.infn.it
VOMS_DB_HOST='localhost'
VOMS_ADMIN_SMTP_HOST=iris.cnaf.infn.it
VOMS_ADMIN_MAIL=voms.administrator@cnaf.infn.it
VO_CERT_MYSQL_VOMS_PORT=15000
VO_CERT_MYSQL_VOMS_DB_USER=cert_mysql_user
VO_CERT_MYSQL_VOMS_DB_PASS=pwd
VO_CERT_MYSQL_VOMS_DB_NAME=voms_cert_mysql_db
Oracle backend
Below is a siteinfo and service file for a minimal VOMS oracle node configuration:
site-info.def:
VOMS_DB_TYPE="oracle"
SITE_NAME="voms-certification.cnaf.infn.it"
VOS="cert.oracle"
ORACLE_CLIENT="/usr/lib/oracle/10.2.0.4/client64"
services/glite-voms:
VOMS_HOST=cert-voms-01.cnaf.infn.it
VOMS_ADMIN_SMTP_HOST=iris.cnaf.infn.it
VOMS_ADMIN_MAIL=voms.administrator@cnaf.infn.it
ORACLE_CONNECTION_STRING="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST = voms-db-02.cr.cnaf.infn.it)(PORT = 1521)))(CONNECT_DATA=(SERVICE_NAME = vomsdb2.cr.cnaf.infn.it)))"
VO_CERT_ORACLE_VOMS_PORT=15000
VO_CERT_ORACLE_VOMS_DB_USER=admin_25
VO_CERT_ORACLE_VOMS_DB_PASS=pwd
WMS
- Mandatory general variables
-
BDII_HOST
-
GROUPS_CONF
-
MYSQL_PASSWORD
-
SE_LIST
-
SITE_NAME
-
SITE_EMAIL
-
USERS_CONF
-
VOS
-
VO_<vo-name>_SW_DIR
-
VO_<vo-name>_VOMSES
-
VO_<vo-name>_VOMS_SERVERS
-
VO_<vo-name>_VOMS_CA_DN
- Mandatory service specific variables: they can be found in
/opt/glite/yaim/examples/services/glite-wms
:
- Default service specific variables: they can be found in
/opt/glite/yaim/defaults/glite-wms.pre
:
Variable Name |
Description |
Value type |
Default Value |
Version |
GLITE_LOCATION_VAR |
- |
path |
"/var/glite" |
4.0.1-10 |
GLITE_LOCATION_LOG |
- |
path |
"/var/log/glite" |
4.0.1-10 |
GLITE_LOCATION_TMP |
- |
path |
"/var/glite" |
4.0.1-10 |
GLITE_USER |
- |
glite user name |
"glite" |
4.0.1-9 |
WMS_CONF_FILE_OVERWRITE |
whether the older configuration file is replaced or kept |
string |
"true" |
4.0.7-5 |
WMS_EXPIRY_PERIOD |
Amount of time (in seconds) a job may spend in the WM queue before being aborted. If too short, it causes trouble with job collections; if too long, queues grow too large. A good compromise is 7200-10000. |
Number |
86400 |
under development |
WMS_MATCH_RETRY_PERIOD |
Time to wait before retrying matchmaking after a first failure. |
Number |
21600 |
under development |
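Following the compromise suggested in the table, a tuned override might look like this (values illustrative, within the suggested ranges):

```shell
# glite-wms overrides (illustrative)
WMS_EXPIRY_PERIOD=10000        # seconds a job may wait in the WM queue
WMS_MATCH_RETRY_PERIOD=21600   # seconds between matchmaking retries
```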
WN
- Mandatory general variables
-
BDII_HOST
-
MON_HOST
(not needed anymore in gLite 3.2 WN)
-
SE_LIST
-
SITE_NAME
-
USERS_CONF
-
VOS
-
VO_<vo-name>_SW_DIR
-
VO_<vo-name>_VOMS_CA_DN
(Mandatory for glite-yaim-core > 4.0.5-7)
-
VO_<vo-name>_VOMSES
- Optional service specific variables: they can be found in
/opt/glite/yaim/examples/siteinfo/services/glite-wn
:
- Default service specific variables: they can be found in
/opt/glite/yaim/defaults/glite-wn.post
:
--
MariaALANDESPRADILLO - 29 Jul 2008