VCycle on EGI
Requirements
Create or use a machine running a recent version of SL6 on which you have root access. Create a host certificate for this machine and ensure that both hostkey.pem and hostcert.pem are available in /etc/grid-security. Communicate the DN to the VO if it needs to be known to the experiment framework in order to run the pilot job or job wrapper.
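As a quick check (the permission values shown are the usual grid convention rather than a vcycle requirement), the certificate placement and the DN can be verified with:
ls -l /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem
chmod 644 /etc/grid-security/hostcert.pem
chmod 400 /etc/grid-security/hostkey.pem
openssl x509 -in /etc/grid-security/hostcert.pem -noout -subject   # prints the DN to communicate to the VO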
Installation
wget -O /etc/yum.repos.d/rocci-cli.repo http://repository.egi.eu/community/software/rocci.cli/4.3.x/releases/repofiles/sl-6-x86_64.repo
wget http://koji.cern.ch/kojifiles/packages/python-novaclient/2012.1/1.el6/noarch/python-novaclient-2012.1-1.el6.noarch.rpm
yum localinstall python-novaclient-2012.1-1.el6.noarch.rpm
yum install -y rpm-build git httpd mod_ssl python-novaclient occi-cli ca-policy-egi-core voms-clients
#git clone https://github.com/vacproject/vcycle.git
git clone https://github.com/Villaz/vcycle
cd vcycle; make rpm; cd ..
yum localinstall /root/vcycle/trunk/RPMTMP/RPMS/noarch/vcycle-0.3.0-1.noarch.rpm
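A quick sanity check that the packages landed (the vcycle version string above may differ depending on the checkout):
rpm -q vcycle occi-cli python-novaclient voms-clients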
Configuration
Add each tenancy to /etc/vcycle.conf.
[tenancy xyz]
tenancy_name = egi
url = https://host-invalid:8787/
proxy = /tmp/x509up_u0
max_machines = 16
type = occi
[vmtype xyz queue]
ce_name = vcycle-xyz.cern.ch
max_machines = 16
backoff_seconds = 600
fizzle_seconds = 400
max_wallclock_seconds = 14400
image_name = 1b06a044-024d-43ce-97df-8fb14d1fea6c
flavor_name = small-1core3gb50gb
x509dn = /DC=ch/DC=cern/OU=computers/CN=host-invalid
heartbeat_file = vm-heartbeat
heartbeat_seconds = 14400
network = /network/public
The x509dn is the DN of the host certificate. The proxy should be the user proxy of a user from the VO who is able to start VMs on the EGI Federated Cloud. The site-specific parameters (endpoint URL, image, flavor and network identifiers) can be found in the EGI AppDB.
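To create such a proxy, something like the following should work (the VO name and lifetime are placeholders; adjust them for your VO):
voms-proxy-init --voms <vo_name> --valid 72:00 --out /tmp/x509up_u0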
Create the user_data file for the site and place it in /var/lib/vcycle/user_data with the naming convention xyz:queue (the tenancy name, a colon, then the vmtype name from the configuration above).
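For example, assuming /var/lib/vcycle/user_data is the directory named above and site_user_data is your prepared file (a hypothetical name):
mkdir -p /var/lib/vcycle/user_data
cp site_user_data "/var/lib/vcycle/user_data/xyz:queue"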
Copy the vcycle-httpd.conf file (in the git directory) to /etc/httpd/conf/httpd.conf.
service httpd restart
service vcycle restart
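If something fails to come up, the SL6 service status gives a first indication:
service httpd status
service vcycle status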
Troubleshooting
- Create a new VM with occi:
occi --endpoint <endpoint> --action create --resource compute --attribute occi.core.title='vm' --mixin os_tpl#<os_tpl> --mixin resource_tpl#<flavour> --context user_data=<user_data> --auth x509 --user-cred /tmp/x509up_u0 --voms
- This command should return a VM identifier. With that identifier, describe the VM:
occi --endpoint <endpoint> --action describe --resource <id_vm> --auth x509 --user-cred /tmp/x509up_u0 --voms
- This command returns the information about the VM; the most important item is the IP address. Check whether the IP is public or private. If it is private, attach the public network:
occi --endpoint <endpoint> --action trigger --resource <id_vm> --trigger /network/public --auth x509 --user-cred /tmp/x509up_u0 --voms
occi --endpoint <endpoint> --action link --resource <id_vm> --auth x509 --user-cred /tmp/x509up_u0 --voms --link /network/public
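- Once the test VM is no longer needed, it can be removed with the standard delete action (same endpoint and authentication options as above; flags may vary slightly between rOCCI client versions):
occi --endpoint <endpoint> --action delete --resource <id_vm> --auth x509 --user-cred /tmp/x509up_u0 --voms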