Here is an example of how to configure job priorities in order to publish VOViews. In this setup the TORQUE server and the CE are on separate boxes.
TORQUE server configuration:
- Set the VOs and queues (that you want to publish) in site-info.def, e.g.:
VOS="dteam atlas cms ops...."
QUEUES="${VOS} short"
ATLAS_GROUP_ENABLE="atlas"
CMS_GROUP_ENABLE="cms"
DTEAM_GROUP_ENABLE="dteam"
OPS_GROUP_ENABLE="ops"
SHORT_GROUP_ENABLE="atlas /atlas/ROLE=production /atlas/ROLE=lcgadmin cms /cms/ROLE=lcgadmin /cms/ROLE=production"
- Run the configuration on the TORQUE server:
$ /opt/glite/yaim/bin/yaim -c -s site-info.def -n TORQUE_server -n TORQUE_utils
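Once YAIM has finished, the queues it created can be listed with the standard Torque client, e.g.:
$ qstat -Q
This should show one queue per VO plus the short queue defined above.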
- Run qmgr to check the access rules for the VOs; e.g. for ATLAS they should look like:
set queue atlas acl_group_enable = True
set queue atlas acl_groups = +atlas
set queue atlas acl_groups += atlasprd
set queue atlas acl_groups += atlassgm
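A quick way to print these settings is qmgr's print command (shown here for the atlas queue):
$ qmgr -c "print queue atlas"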
lcg-CE configuration (gLite 3.1, SLC4):
- Install the latest versions of the following RPMs:
glite-yaim-core
glite-yaim-lcg-ce
glite-yaim-torque-utils
- Install the latest versions of the information provider packages:
lcg-info-dynamic-scheduler-pbs-2.0.1-1
lcg-info-dynamic-scheduler-generic-2.2.2-1
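To verify which versions are actually installed, query rpm directly, e.g.:
$ rpm -q glite-yaim-core glite-yaim-lcg-ce glite-yaim-torque-utils
$ rpm -qa | grep lcg-info-dynamic-scheduler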
- Check the variables in site-info.def:
FQANVOVIEWS=yes
VOS="dteam ops atlas cms ..."
QUEUES="${VOS}"
<VO>_GROUP_ENABLE="<vo>" for each VO
e.g. OPS_GROUP_ENABLE="ops"
If you define a new queue, add it to QUEUES and give it its own GROUP_ENABLE variable:
QUEUES="${VOS} new_queue"
<NEW_QUEUE>_GROUP_ENABLE="atlas /VO=atlas/GROUP=/atlas/ROLE=production /VO=atlas/GROUP=/atlas/ROLE=lcgadmin cms /VO=cms/GROUP=/cms/ROLE=lcgadmin /VO=cms/GROUP=/cms/ROLE=production"
- Run the configuration on the CE:
$ /opt/glite/yaim/bin/yaim -c -s site-info.def -n lcg-CE -n TORQUE_utils
- Check the file /opt/glite/etc/lcg-info-dynamic-scheduler.conf.
It should look like this (the vomap entries map the local Unix groups used in the queue ACLs to VOMS FQANs):
-------------
[Main]
static_ldif_file: /opt/glite/etc/gip/ldif/static-file-CE.ldif
vomap :
atlassgm:/atlas/Role=lcgadmin
atlasprd:/atlas/Role=production
cmssgm:/cms/Role=lcgadmin
cmsprd:/cms/Role=production
and so on...
---------------
- Run the scheduler plugin and check that GlueVOViewLocalID entries are published, e.g.:
dn: GlueVOViewLocalID=/atlas/Role_production,GlueCEUniqueID=lxb1937.cern.ch:2119/jobmanager-lcgpbs-short,mds-vo-name=resource,o=grid
GlueVOViewLocalID: /atlas/Role_production
GlueCEStateRunningJobs: 12
GlueCEStateWaitingJobs: 5
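To trigger this check by hand, the plugin can be run directly and the resource BDII queried with ldapsearch. A minimal sketch (the host name is just the one from the example above; the plugin path shown is the usual gLite 3.1 location but may differ between releases):
$ /opt/glite/libexec/lcg-info-dynamic-scheduler -c /opt/glite/etc/lcg-info-dynamic-scheduler.conf
$ ldapsearch -x -h lxb1937.cern.ch -p 2170 -b "mds-vo-name=resource,o=grid" objectClass=GlueVOView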
Maui configuration example:
This is one example of a Maui configuration for fair-share scheduling:
QUEUETIMEWEIGHT 2
XFACTORWEIGHT 10
XFACTORCAP 100000
RESWEIGHT 10
CREDWEIGHT 30
USERWEIGHT 10
GROUPWEIGHT 10
FSWEIGHT 20
FSUSERWEIGHT 1
FSGROUPWEIGHT 10
FSQOSWEIGHT 100
FSPOLICY DEDICATEDPES%
FSDEPTH 24
FSINTERVAL 24:00:00
FSDECAY 0.99
FSCAP 100000
USERCFG[DEFAULT] FSTARGET=7 MAXJOBQUEUED=350
GROUPCFG[atlas] FSTARGET=10 PRIORITY=100 MAXPROC=14 QDEF=lhcatlas
GROUPCFG[atlasprd] FSTARGET=20 PRIORITY=200 MAXPROC=14 QDEF=lhcatlas
GROUPCFG[atlassgm] PRIORITY=300 MAXPROC=1 QDEF=lhcatlas
QOSCFG[lhcatlas] FSTARGET=30 MAXPROC=14
GROUPCFG[cms] FSTARGET=10 PRIORITY=100 MAXPROC=14 QDEF=lhccms
GROUPCFG[cmsprd] FSTARGET=20 PRIORITY=200 MAXPROC=14 QDEF=lhccms
GROUPCFG[cmssgm] PRIORITY=300 MAXPROC=1 QDEF=lhccms
QOSCFG[lhccms] FSTARGET=30 MAXPROC=14
The lines above should be appended to the end of /var/spool/maui.cfg on both the TORQUE server and the CE.
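After editing maui.cfg, restart Maui and inspect the result with the standard Maui diagnostics (assuming the usual maui init script):
$ service maui restart
$ diagnose -f    # fair-share usage per user/group/QOS against the FSTARGETs
$ diagnose -p    # per-job priority, broken down by component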
Note that re-running the YAIM configuration of the node will overwrite these changes.
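For reference, this is roughly how Maui combines the weights above into a single job priority. This is a simplified sketch based on the Maui administrator guide; the exact subcomponent grouping varies by version:
priority = CREDWEIGHT * ( USERWEIGHT  * user_priority
                        + GROUPWEIGHT * group_priority )
         + FSWEIGHT   * ( FSUSERWEIGHT  * user_fs_delta
                        + FSGROUPWEIGHT * group_fs_delta
                        + FSQOSWEIGHT   * qos_fs_delta )
         + QUEUETIMEWEIGHT * time_in_queue
         + XFACTORWEIGHT   * expansion_factor
where each fs_delta is the FSTARGET minus the actual usage over the FSDEPTH x FSINTERVAL window. With the values above, an atlasprd job therefore ranks ahead of a plain atlas job both through its higher GROUPCFG PRIORITY (200 vs 100) and through its higher fair-share target (20 vs 10).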
--
FaridaNaz - 03 Mar 2008