OpenNebula Evaluation

Notes on the evaluation of OpenNebula for the management of SA2 machines.

(IN PROGRESS)

User Account

The Virtual Infrastructure is administered by the oneadmin account; this account will be used to run the OpenNebula services and to perform regular administration and maintenance tasks. OpenNebula supports multiple users creating and managing virtual machines and networks. You will create these users later when configuring OpenNebula.

Follow these steps:

Create the cloud group to which the OpenNebula administrator user will belong:

$ groupadd cloud

Create the OpenNebula administrative account (oneadmin); we will use the OpenNebula directory as the home directory for this user:

$ useradd -d /srv/cloud/one -g cloud -m oneadmin

Give oneadmin sudo rights by editing the sudoers file:

$ visudo -f /etc/sudoers

Add this line at the end of the file:

oneadmin ALL=(ALL) NOPASSWD: ALL

Get the user and group id of the OpenNebula administrative account. These ids will be used later to create users with the same ids on the cluster nodes:

$ id oneadmin
uid=1002(oneadmin) gid=1003(cloud) groups=1003(cloud)

In this case the user id is 1002 and the group id is 1003.

Create the group and the user account also on every node that runs VMs. Make sure that the ids are the same as on the front-end, in this example 1003 for the group and 1002 for the user:

$ groupadd --gid 1003 cloud
$ mkdir -p /srv/cloud
$ useradd --uid 1002 -g cloud -d /srv/cloud/one oneadmin

You can use any other method to create a common cloud group and oneadmin account on the nodes, for example NIS.

Environment variable settings (when you are logged in as oneadmin):

$ sudo vim ~/.bashrc

export ONE_AUTH="oneadmin:password_of_oneadmin"
export ONE_LOCATION="/srv/cloud/one"
export ONE_XMLRPC="http://localhost:2633/RPC2"
export PATH=$PATH:$ONE_LOCATION/bin
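As a quick sanity check you can reload the file and verify that the variables are set:

$ source ~/.bashrc
$ env | grep ONE_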

Networking

Add on every host a line in the /etc/hosts file in order to identify the other hosts in the network.

For example on node omii001:
$ vim /etc/hosts

131.154.5.21 solaris
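For a small setup the file could end up looking like this on each machine (the second entry and its address are only an illustration, adapt them to your actual hosts):

131.154.5.21    solaris
131.154.5.22    omii001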

Secure Shell Access

You need to create ssh keys for the oneadmin user and configure the machines so it can connect to them using ssh without the need for a password.

Generate oneadmin ssh keys:

$ ssh-keygen    (if you want to use DSA: "ssh-keygen -t dsa")
Generating public/private rsa key pair.
Enter file in which to save the key (/srv/cloud/one/.ssh/id_rsa):
Created directory '/srv/cloud/one/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /srv/cloud/one/.ssh/id_rsa.
Your public key has been saved in /srv/cloud/one/.ssh/id_rsa.pub.
The key fingerprint is:

c:69:1a:18:d3:48:6b:aa:84:ee:a9:59:46:25:02:db oneadmin@etics-01.cnaf.infn.it

We can just press Enter during the generation; otherwise we can choose the path where the keys will be saved and an optional passphrase (in that case we need to set up ssh-agent and ssh-add, in order to be asked for the passphrase only the first time).

Public key installation (the file ending in .pub): we need to copy it to the server we want to connect to:

$ scp ~/.ssh/id_rsa.pub oneadmin@omii001.cnaf.infn.it:~/.ssh/id_rsa.pub.etics-01

Log in to the server and add the public key to the authorized_keys file:
$ ssh omii001.cnaf.infn.it
$ cd .ssh
$ cat id_rsa.pub.etics-01 >> authorized_keys
$ exit

When prompted for a passphrase, press Enter so that the private key is not encrypted. Copy the public key to ~/.ssh/authorized_keys to let the oneadmin user log in without the need to type a password. Do that also for the front-end:

$ cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

Tell the ssh client not to ask before adding hosts to the known_hosts file. This goes into ~/.ssh/config:

$ cat ~/.ssh/config
Host *
    StrictHostKeyChecking no

Check that the sshd daemon is running on the cluster nodes. oneadmin must be able to log in to the cluster nodes without being prompted for a password. Also remove any Banner option from the sshd_config file on the cluster nodes.
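A quick way to test this from the front-end, using the host names that appear in these notes as examples:

$ ssh oneadmin@omii001.cnaf.infn.it hostname
$ ssh oneadmin@solaris hostname

If either command asks for a password, review the authorized_keys and sshd configuration again.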

NFS settings

Server side:
Install portmap and nfs-kernel-server:
$ sudo apt-get install portmap nfs-kernel-server
Add the ip address of the NFS client in the exports file:
$ sudo vim /etc/exports
/srv/cloud/one/var 131.154.100.175(rw,sync,no_subtree_check)

Restart the services:
$ sudo /etc/init.d/portmap restart
$ sudo /etc/init.d/nfs-kernel-server restart

Client side:
$ sudo apt-get install portmap nfs-common    (nfs-utils on Red Hat based systems)
Edit the hosts.allow and hosts.deny files:
$ sudo vim /etc/hosts.deny
portmap : ALL

$ sudo nano /etc/hosts.allow
portmap : server_ip_address
Create a directory to use as a mount point (if necessary):
$ mkdir /srv/cloud/one/var
Edit the fstab file in order to enable mounting of the NFS space during the boot of the OS:
$ sudo vim /etc/fstab
server_ip_address:/srv/cloud/one/var /srv/cloud/one/var nfs rw 0 0
For the first time, you can run:
$ sudo mount -t nfs server_ip_address:/srv/cloud/one/var /srv/cloud/one/var
In that way you can verify everything is working. It could be necessary to start the portmap and nfslock services:
$ sudo /etc/init.d/portmap start
$ sudo /etc/init.d/nfslock start
In that case you should add the services to the DAEMONS list in /etc/rc.conf (or /etc/rc.local) in order to run them automatically at system start:
$ sudo vim /etc/rc.conf
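To verify that the share is really mounted and writable from the client, something like this should work (generic checks, adapt the path if yours differs):

$ mount | grep /srv/cloud/one/var
$ df -h /srv/cloud/one/var
$ touch /srv/cloud/one/var/nfs_test && rm /srv/cloud/one/var/nfs_test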

If you want to check rpc you can run:
$ sudo rpcinfo -p
On Red Hat based OSes you can check the list of services with:
$ chkconfig --list
and add a service to the ones automatically loaded at system start with:
$ chkconfig service_name on

RHEL5 Note

A possible problem is mounting the NFS file system on the host at boot time. If the host boots into runlevel 3, in /etc/rc3.d there should be some files called:

K75netfs S10network

K75netfs takes care of mounting network volumes, and S10network takes care of bringing up network interfaces.

These files are run in alphabetical order when the system enters runlevel 3. So, it was trying to mount the NFS volumes before the network interfaces were brought up, and that, of course, fails.

The way I solved this problem was that I renamed K75netfs to S96netfs to make it run a good bit later in the init process.

$ cd /etc/rc3.d
$ mv K75netfs S96netfs

You need to do this for whatever runlevels you want to have the NFS mounts automatically mounted in. So, if you want this to happen in runlevel 5, just make the same changes in /etc/rc5.d:
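For example, assuming the same script name is present in /etc/rc5.d:

$ cd /etc/rc5.d
$ mv K75netfs S96netfs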

The network shares are mounted with the netfs service, so this initscript is to be enabled in your runlevel (3 or 5).

Software Packages

This machine will act as the OpenNebula server and therefore needs to have the following software installed:
ruby >= 1.8.5
sqlite3 >= 3.5.2
xmlrpc-c >= 1.06
openssl >= 0.9
ssh
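A quick way to check the installed versions (xmlrpc-c-config is shipped with the xmlrpc-c development package; adapt if your packaging differs):

$ ruby -v
$ sqlite3 --version
$ xmlrpc-c-config --version
$ openssl version
$ ssh -V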

Ubuntu 9.04 note (sqlite3):
$ sudo apt-get install sqlite3 libsqlite3-dev
$ sudo gem install sqlite3-ruby

SLC4X note:
$ gunzip sqlite3.bin.gz
$ chmod a+x sqlite3.bin
$ ./sqlite3.bin

$ wget xmlrpc-c.tgz
$ chmod a+x xmlrpc-c.tgz
$ tar xvfz xmlrpc-c.tgz
$ cd xmlrpc-c
$ ./configure
$ make
$ make install

$ sudo yum install gcc-c++

$ wget http://address_of_ruby_download
$ tar xfz ruby-latest.tar.gz
$ cd ruby-1.8.*
$ ./configure --prefix=/usr
$ make
$ sudo make install

Additionally, to build OpenNebula from source:
scons >= 0.97
g++ >= 4 (requires GMP + MPFR)
flex >= 2.5 (optional, only needed to rebuild the parsers)
bison >= 2.3

Optional packages:
ruby-dev
rubygems
rake
make

Download and untar the OpenNebula tarball. Change to the created folder and run scons to compile OpenNebula:

$ scons

Edit if necessary: /srv/cloud/one/etc/oned.conf

Note: Be sure that VM_DIR is set to the path where the front-end's $ONE_LOCATION/var directory is mounted in the cluster nodes
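For the layout used in these notes (var exported over NFS and mounted at the same path on the nodes), the line would look like this (a sketch, check it against your own oned.conf):

VM_DIR=/srv/cloud/one/var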

Language setting

In order to avoid incompatibility issues, it is recommended to set the English language on every node. If your system has a different language setting you can override it in the bashrc file:
$ sudo vim ~/.bashrc
export LANG=en_GB.UTF-8

[Node] Migration

OpenNebula uses libvirt's migration capabilities. More precisely, it uses the TCP protocol offered by libvirt. In order to configure the physical nodes, the following files have to be modified:
- /etc/libvirt/libvirtd.conf : uncomment "listen_tcp = 1". The security configuration is left to the admin's choice; the file is full of useful comments to achieve a correct configuration.
- /etc/default/libvirt-bin : add the -l option to libvirtd_opts
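On a Debian/Ubuntu node the two edits could look like this (assuming the stock libvirtd_opts="-d" default; adapt to your distribution and then restart the daemon):

$ sudo vim /etc/libvirt/libvirtd.conf
listen_tcp = 1
$ sudo vim /etc/default/libvirt-bin
libvirtd_opts="-d -l"
$ sudo /etc/init.d/libvirt-bin restart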

Network settings, image modification

Using the Leases within the Virtual Machine

Hypervisors can attach a specific MAC address to a virtual network interface, but Virtual Machines need to obtain an IP address.

Configuring the Virtual Machine to use the Leases

With OpenNebula you can also derive the IP address from the MAC address using the MAC_PREFFIX:IP rule. In order to achieve this we provide a context script for Debian based systems. This script can be easily adapted for other distributions; check dev.opennebula.org.
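For reference, a minimal sketch of how a lease ends up attached to a VM; the network name, bridge and addresses below are made-up examples, not values from this setup:

$ cat small_net.template
NAME   = "Small network"
TYPE   = FIXED
BRIDGE = br0
LEASES = [IP=192.168.0.5]
$ onevnet create small_net.template

and in the VM template:

NIC = [ NETWORK = "Small network" ]

The MAC assigned to the lease is what the vmcontext.sh script below uses to derive the IP inside the guest.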

Copy the script (in this case ONE_LOCATION is the directory where the binary file has been extracted, not where it has been installed):
$ONE_LOCATION/share/scripts/vmcontext.sh

into the /etc/init.d directory in the VM root file system. (*)

Execute the script at boot time before starting any network service; usually runlevel 2 should work:
$ ln -s ../init.d/vmcontext.sh etc/rc2.d/S01vmcontext.sh
If it doesn't work, try with:
$ ln etc/init.d/vmcontext.sh etc/rc2.d/S01vmcontext.sh

(*) In order to modify the files in the VM root file system you can mount the partitions on loop devices, running these commands:
List the partitions inside the image:
$ kpartx -l disk.img

Set up the loop devices in /dev/mapper:
$ sudo kpartx -av /srv/cloud/one/images/disk.img

Mount a loop device for a partition:
$ sudo mkdir /mnt/partition
$ sudo mount -t ext3 /dev/mapper/loop0p1 /mnt/partition/

Now we can modify the files, paying attention that the root directories (like /etc) must not be used directly; instead we have to work under the mount point (e.g. /mnt/partition/etc).

Unmounting:
$ sudo umount /mnt/partition

Deleting the loop devices:
$ sudo kpartx -dv /srv/cloud/one/images/disk.img

Preparing the Cluster

Cluster nodes check-list:

ACTION                                      DONE/COMMENTS
host-names of cluster nodes
ruby, sshd installed in the nodes
oneadmin can ssh the nodes password-less

Create the following hierarchy in the front-end root file system:
/srv/cloud/one, will hold the OpenNebula installation and the clones for the running VMs
/srv/cloud/images, will hold the master images and the repository
$ tree /srv
/srv/
`-- cloud
    |-- one
    `-- images
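Since oneadmin's home (/srv/cloud/one) was already created by useradd above, usually only the images directory and the ownership need to be added, for example:

$ sudo mkdir -p /srv/cloud/images
$ sudo chown -R oneadmin:cloud /srv/cloud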

KVM and QEMU

KVM configuration

The cluster nodes must have a working installation of KVM, which usually requires:
CPU with VT extensions
libvirt >= 0.4.0
the qemu user-land tools
kvm kernel modules (kvm.ko, kvm-{intel,amd}.ko), available from kernel 2.6.20 onwards

First of all check that your system has the kvm kernel modules installed:
$ modprobe -l | grep kvm
/lib/modules/2.6.18-128.7.1.el5/extra/kvm.ko
/lib/modules/2.6.18-128.7.1.el5/extra/kvm-amd.ko
/lib/modules/2.6.18-128.7.1.el5/extra/kvm-intel.ko
If you don't have the kvm kernel modules installed, install them with:
$ yum install kvm
If you have problems follow the next section "Installation of KVM".
OpenNebula uses the libvirt interface to interact with KVM, so the following steps are required on the cluster nodes to get the KVM driver running:
The remote hosts must have the libvirt daemon running.
The user with access to these remote hosts on behalf of OpenNebula (typically oneadmin) has to belong to the groups allowed to use the libvirt daemon (here the cloud group, set as unix_sock_group below) in order to be able to launch VMs.
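A quick check that a node is ready (generic commands, not specific to this setup): confirm the modules are loaded and that oneadmin can talk to the libvirt daemon:

$ lsmod | grep kvm
$ virsh -c qemu:///system list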

Installation of KVM

If the kernel of your distribution doesn't have kvm, or if you want to update it, you have to follow these steps.
Requirements:
Linux kernel >= 2.6.16
kernel headers and devel:
$ sudo yum install kernel-devel kernel-headers
gcc 3.x:
$ sudo yum install gcc
SDL library and headers:
$ sudo yum install SDL
zlib:
$ sudo yum install zlib-devel
alsa:
$ wget ftp://ftp.alsa-project.org/pub/lib/alsa-lib-1.0.21a.tar.bz2
$ tar xvfj alsa-lib-1.0.21a.tar.bz2
$ cd alsa-lib
$ ./configure
$ sudo make install
libuuid:
$ sudo yum install ta

$ wget
$ tar -xvzf kvm-release.tar.gz
$ cd kvm-release
$ ./configure --prefix=/usr/local/kvm
$ make
$ sudo make install
If you have an Intel CPU:
$ sudo /sbin/modprobe kvm-intel
If you have an AMD CPU:
$ sudo /sbin/modprobe kvm-amd

Installation of Libvirt

*********************************
Installation from source → dependency problems..
$ wget libvirt.
$ ./configure --prefix=$HOME/usr ( --with-rhel5-api=yes )
$ make
$ make install
Configure issue: if you have some trouble try with:
$ ./configure --prefix=$HOME/usr --without-xen
Maybe you have to install these packages (libxml2-dev):
$ sudo yum install libxml2-devel
$ sudo yum install gnutls
$ sudo yum install gnutls-devel
$ sudo yum install cyrus-sasl-devel
*************************************

Add an external repository in order to obtain the latest rpm for SL5:

$ sudo vim /etc/yum.repos.d/slc5-updates.repo

[slc5-updates]
name=Scientific Linux CERN 5 (SLC5) bugfix and security updates
baseurl=http://linuxsoft.cern.ch/cern/slc5X/$basearch/yum/updates/
gpgcheck=0
enabled=1
protect=1

$ yum install libvirt.x86_64
$ yum install libvirt-python.x86_64

Other host software:
$ sudo yum install ruby

Red Hat note: edit the sudoers file on the remote machine (via sudo), search for 'Defaults requiretty' and comment it out:
$ sudo vim /etc/sudoers
#Defaults requiretty

Edit libvirt configuration: $ sudo vim /etc/libvirt/libvirtd.conf

# opennebula modification: change these lines:
#listen_tls = 0
#listen_tcp = 1
#unix_sock_group = "libvirt"
#unix_sock_rw_perms = "0770"
#auth_unix_ro = "none"
#auth_unix_rw = "none"

# with these others:
listen_tls = 0
listen_tcp = 1
unix_sock_group = "cloud"
unix_sock_rw_perms = "0770"
auth_unix_ro = "none"
auth_unix_rw = "none"
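On the SL5/RHEL5 nodes the listen flag is usually passed through /etc/sysconfig/libvirtd (an assumption about the packaged init script; verify on your nodes). After the change, restart the daemon and check that it accepts TCP connections:

$ sudo vim /etc/sysconfig/libvirtd
LIBVIRTD_ARGS="--listen"
$ sudo /etc/init.d/libvirtd restart
$ sudo netstat -ltnp | grep libvirtd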

($ yum update)
$ yum install virt-manager
$ yum install ruby ruby-devel ruby-docs ruby-ri ruby-irb ruby-rdoc
