Deployment SCENARIO: "Mixed Mode"

Before Starting

  1. HOSTNAME: emitestbed34.cnaf.infn.it, plus 2 IPs for the virtual machines emitestbed35.cnaf.infn.it and emitestbed36.cnaf.infn.it
  2. OS: SL5 X86_64 Installed
  3. No Host certificate required
  4. No Network Bridge configured
  5. Hardware must support virtualization (please run grep --color vmx /proc/cpuinfo)
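The virtualization check in point 5 can be wrapped in a small script. A minimal sketch (the `check_virt` helper name is an invention for illustration), covering AMD's svm flag as well as Intel's vmx:

```shell
#!/bin/bash
# check_virt: succeed if a cpuinfo-style file lists the vmx (Intel VT-x)
# or svm (AMD-V) CPU flag, i.e. the hardware supports virtualization.
check_virt() {
    grep -Eq '(^| )(vmx|svm)( |$)' "$1"
}

# On the hypervisor host:
check_virt /proc/cpuinfo && echo "virtualization supported" \
                         || echo "virtualization NOT supported"
```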

Service Installation

  1. Repositories (see EMI basic configuration): egi-trustanchors.repo + emi-2-rc-sl5.repo + epel.repo
    1. $> yum clean all
    2. $> yum makecache
    3. $> yum install ca-policy-egi-core
    4. $> yum install lcg-CA
    5. $> yum install yum-protectbase.noarch
  2. INSTALLING WN + TORQUE + VIRTUALIZATION PACKAGES
    1. $> yum install emi-wn emi-torque-client
    2. $> yum install emi-release
    3. $> yum install kvm-qemu-img
    4. $> yum install kmod-kvm
    5. $> yum install libvirt
    6. $> yum install python-virtinst
    7. $> yum install pyOpenSSL
  3. INSTALLING WNODES
    1. $> yum install wnodes*
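After the installation steps above, a quick sanity check that the main packages are actually present (package names taken from the yum commands above; `rpm -q` is standard on SL5):

```shell
#!/bin/bash
# Verify the packages installed in the previous steps are present.
for pkg in emi-wn emi-torque-client emi-release kvm-qemu-img kmod-kvm \
           libvirt python-virtinst pyOpenSSL; do
    rpm -q "$pkg" >/dev/null 2>&1 || echo "MISSING: $pkg"
done
```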

Service Configuration

CONFIGURE WN with Torque/Maui

  1. Install the WN with Torque following the WN deployment logbook, excluding GLEXEC and MPI
    1. Copy /etc/munge/munge.key from the CE to /etc/munge/munge.key on this host
    2. $> chown munge /etc/munge/munge.key
    3. $> /etc/init.d/munge start
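Step 1 above ("copy the munge key from the CE") is shorthand; one way to do it is over scp. A sketch, assuming root access to the CE (the CE hostname is a placeholder):

```shell
# Copy the munge key from the CE and fix ownership/permissions
# (munged refuses to start if the key is too widely readable).
scp root@<ce-host>:/etc/munge/munge.key /etc/munge/munge.key
chown munge:munge /etc/munge/munge.key
chmod 400 /etc/munge/munge.key
/etc/init.d/munge start
```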
  2. Wnodes specific configuration
    1. $> vim /etc/wnodes/nameserver/mac_list.ini
    2. $> vim /etc/wnodes/nameserver/wnodes_hv_config.ini
    3. $> vim /etc/wnodes/nameserver/wnodes_bait_config.ini
    4. $> vim /etc/wnodes/manager/wnodes.ini
    5. $> service wnodes_nameserver start
    6. $> wnodes_manager -a wnodes-emi-images http://torquemada.cr.cnaf.infn.it/wnodes/wnodes_sl5_wn_emi x86_64 raw /dev/mapper/VolGroup00-LogVol00
    7. $> wnodes_manager -l
    8. $> vim /etc/wnodes/hypervisor/wnodes.ini
    9. $> vim /etc/wnodes/bait/wnodes.ini
    10. $> mkdir -p /usr/local/wnodes/repo (workaround: create the directory by hand)
    11. $> service libvirtd start
    12. $> service wnodes_hypervisor start --> this also starts the wnodes_bait process
    13. $> wnodes_manager -t all
    14. $> wnodes_manager -s emitestbed34
    15. $> wnodes_manager -S emitestbed34
    16. $> chmod 500 /usr/bin/wnodes/site_specific/wnodes_preexec
    17. $> wget patch_wnodes_preexec.txt and apply the patch
    18. $> cp /etc/wnodes/site_specific/wnodes_preexec.conf.tpl /etc/wnodes/site_specific/wnodes_preexec.conf
    19. $> vi /etc/wnodes/site_specific/wnodes_preexec.conf
    20. $> cat /var/torque/mom_priv/prologue
#!/bin/bash
while [ ! -f /usr/bin/wnodes/site_specific/wnodes_preexec ]; do sleep 3 ; done
sleep 10
/usr/bin/wnodes/site_specific/wnodes_preexec -f /etc/wnodes/site_specific/wnodes_preexec.conf --jobid $1 --username $2 &> /root/prologue.txt
    21. $> chmod 500 /var/torque/mom_priv/prologue
  3. Wnodes specific configuration: WN image

Configuration ON Torque server

[root@emi-demo13 ~]# cat siteinfo/wnodes_queue_command
create queue qwnodes
set queue qwnodes queue_type = Execution
set queue qwnodes Priority = 1000000
set queue qwnodes max_running = 80
set queue qwnodes resources_max.cput = 100:00:00
set queue qwnodes resources_max.walltime = 100:00:00
set queue qwnodes resources_default.neednodes = cloudtf
set queue qwnodes enabled = True
set queue qwnodes started = True

[root@emi-demo13 ~]# qmgr  < /root/siteinfo/wnodes_queue_command 
Max open servers: 9
create queue qwnodes
set queue qwnodes queue_type = Execution
set queue qwnodes Priority = 1000000
set queue qwnodes max_running = 80
set queue qwnodes resources_max.cput = 100:00:00
set queue qwnodes resources_max.walltime = 100:00:00
set queue qwnodes resources_default.neednodes = cloudtf
set queue qwnodes enabled = True
set queue qwnodes started = True

[root@emi-demo13 ~]# qmgr -c "set queue qwnodes resources_default.neednodes = cloudtf"
[root@emi-demo13 ~]# qmgr -c "set server managers += root@emitestbed35.cnaf.infn.it"
[root@emi-demo13 ~]# qmgr -c "set server managers += root@emitestbed36.cnaf.infn.it"
[root@emi-demo13 ~]# qmgr -c "set server managers += root@emitestbed34.cnaf.infn.it"
[root@emi-demo13 ~]# qmgr -c "set queue demo resources_default.neednodes = lcgpro"
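The queue and manager settings above can be double-checked with read-only qmgr queries, e.g.:

```shell
# Inspect the queue and server configuration just applied:
qmgr -c "list queue qwnodes"
qmgr -c "print server" | grep -E 'qwnodes|managers'
```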

[root@emi-demo13 ~]# cat /var/torque/server_priv/nodes
emitestbed23.cnaf.infn.it np=2 lcgpro
emitestbed34.cnaf.infn.it np=3 cloudtf bait 
emitestbed35.cnaf.infn.it np=1 qwnodes
emitestbed36.cnaf.infn.it np=1 qwnodes

[root@emi-demo13 ~]# cat /var/spool/maui/maui.cfg
# MAUI configuration example

SERVERHOST              emi-demo13.cnaf.infn.it
ADMIN1                  root
ADMIN3                  edginfo rgma edguser ldap
ADMINHOSTS              emi-demo13.cnaf.infn.it 
RMCFG[base]             TYPE=PBS
SERVERPORT              40559
SERVERMODE              NORMAL

# Set the PBS server polling interval. If you have short queues and/or
# short jobs, it is worth setting a short interval (10 seconds).

RMPOLLINTERVAL        00:00:10

# a max. 10 MByte log file in a logical location

LOGFILE               /var/log/maui.log
LOGFILEMAXSIZE        10000000
LOGLEVEL              1

# Set the delay to 1 minute before Maui tries to run a job again,
# in case it failed to run the first time.
# The default value is 1 hour.

DEFERTIME       00:01:00

# Necessary for MPI grid jobs
ENABLEMULTIREQJOBS TRUE
NODECFG[emitestbed34.cnaf.infn.it] PARTITION=virtual
CLASSCFG[qwnodes] PLIST=virtual PDEF=virtual

[root@emi-demo13 ~]#  /etc/init.d/maui restart
Shutting down MAUI Scheduler:                              [  OK  ]
Starting MAUI Scheduler:                                   [  OK  ]
[root@emi-demo13 ~]# 
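Once Maui is back up, two standard Maui commands can confirm it picked up the new partition and class settings:

```shell
showq          # scheduler's queue view; jobs on qwnodes should appear here
diagnose -n    # per-node diagnostics, including the PARTITION=virtual node
```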

Service Testing

On the WN hosting the Wnodes server

  1. Check daemons:
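A sketch of the daemon check, using the service names started during the configuration steps above:

```shell
# Verify the WNoDeS-related daemons are running:
service libvirtd status
service wnodes_nameserver status
service wnodes_hypervisor status
ps aux | grep '[w]nodes_bait'   # bait is started by wnodes_hypervisor
```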

On the Torque server

  1. Enter a pool account user: $> su - tst01
  2. Submit a test job
    1. $> qsub -q qwnodes test.sh (where test.sh is a bash script with commands like /bin/hostname inside)
    2. $> qstat -a
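A minimal test.sh for the submission above (any script printing the execution host will do; on a WNoDeS VM it should report emitestbed35 or emitestbed36):

```shell
#!/bin/bash
# Minimal test job: print where and when it ran.
/bin/hostname
date
```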

-- DaniloDongiovanni - 04-May-2012

Topic revision: r4 - 2012-05-21 - DaniloDongiovanniExternal