CVI User Guide for IT-GT

A simple user guide to the CERN Virtual Infrastructure (CVI).

NB - This document is NOT intended to replace any official guidance from central IT services; it is merely a suggestion of how to use the service as a replacement for the vnode machines commonly used for certification and other short-turnaround tasks.

Because the CVI service (running on Microsoft Hyper-V) has a different architecture from the in-house vnode service (which uses Xen), deploying an initial image from scratch takes a different amount of time. However, CVI lets the user save and restore snapshots, which makes subsequent re-use of a machine simple.

This Document assumes that:

  • You already have a CERN id
  • You have already given your public SSH key to the testbed administrators

Prerequisites

You have to be registered by an IT-GT CVI administrator to have permission to use the IT-GT CVI infrastructure. In detail:
  • Go to the CVI VM request form, find the "Host group" field and try to select something other than Central Service. If you do not see other host groups, especially those whose names start with "IT-GT", you are not registered as an IT-GT CVI user - ask the IT-GT admins to register you by sending an e-mail to it-gt-cvi-admin@cern.ch, specifying your AFS userid.
  • You must be registered in our system for authentication (old VNODE users already are). If you are not (e.g. you deploy a machine using a suggested template and cannot access it), please read the information about accessing IT/GT resources and contact it-gt-cvi-admin@cern.ch.

How to Request a Machine

[screenshot: screenshot-req.png]

  1. Click Request a Virtual Machine if you don't have the fields shown above
  2. The Owner of the machine should normally be left as yourself. It is possible to put an e-group name here if you want other members of that group to have admin rights over the machine (please ensure that you're a member of the group first, or you won't be able to stop your machine...)
  3. Name the computer (this goes into the NETDB)
  4. Give a meaningful description of the machine - this also goes into the NETDB and assists system administrators in identifying possible anomalous behaviour
  5. Change Host Group to IT-GT\Users (other groups are reserved for special roles)
  6. Leave as 'Best Rated' and it will assign the machine to the hypervisor with the lowest load
  7. Choose an Image for the machine. Ones suitable for general use are:
    • SCIENTIFIC LINUX 5 / x86_64 VM CVIVM
    • SCIENTIFIC LINUX 6 / x86_64 VM CVIVM
    • Debian LINUX 6 / amd64 VM CVIVM
    • ETICS WN / SL5 X86_64 EU (note the 'End User' variant only!)
    • (For more details about available templates - please read section about OS templates.)
  8. If you wish the machine to be automatically decommissioned, fill in an expiry date here
  9. For normal certification usage, we recommend
    • max. 1GB memory
    • 1 CPU
    • 10 GB System disk
  10. Finally select '*Request*'
You should see "Request successfully submitted. You will receive an email when the virtual machine is ready." at the bottom of the screen.

You will receive 2 emails - one confirming the request, and another (roughly 15 minutes later) confirming that the machine is ready. The My Request List menu on the side will show the current state of the request process.

Once the machine is ready, you may ssh in directly as user root (using your private key) and customize it to fit your requirements, add packages, etc.
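For example (a minimal sketch - "mymachine.cern.ch" is an illustrative hostname, and the key path assumes the key whose public part you registered with the testbed administrators):

$ ssh -i ~/.ssh/id_rsa root@mymachine.cern.ch
# yum install emacs screen    # e.g. add any extra packages you need (yum on SL templates, apt-get on Debian)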

Please note that with some templates (e.g. SLC) you may have to wait some time (15-30 min) before you can log in, even if you have received the email stating 'your machine is ready'. The machine may perform a software update during startup, so it may respond to ping requests while ssh still returns 'connection refused' because system startup has not completed.

Adding a host certificate

If you are testing any grid services that require a host certificate, these can be obtained from the CERN CA:
  • Go to host certificate management
  • in the Request Host Certificates window you should see the machine you created above
  • choose [select] next to the hostname
[screenshot: screenshot-CA.png]
  • follow the instructions to generate the certificate signing request (NB - if you copy and paste, use the hostname of the machine you created, not host.cern.ch); a minimal example is sketched below
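A minimal sketch with openssl (the hostname mymachine.cern.ch and the file names are illustrative - follow the CA page's exact instructions for the subject fields):

$ openssl req -new -newkey rsa:2048 -nodes \
    -keyout hostkey.pem -out hostcert-request.csr \
    -subj "/CN=mymachine.cern.ch"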

Checkpointing

Once you have verified that the machine has a basic install of the OS and any extra packages you need, it is a good idea to save a checkpoint. Restoring from a checkpoint is faster than creating and customising a fresh machine.

Create New CheckPoint

[screenshot: screenshot-vm_details.png]
  • Choose 'CheckPoints' on the RHS menu
  • Give the checkpoint a suitable name and description and select Create
[screenshot: screenshot-chpt.png]

Restore from CheckPoint

  • Select the machine from [Manage my Virtual Machines] as in the screenshot above
  • Select 'CheckPoints'
  • Choose the checkpoint you saved previously from the drop-down
  • select Restore

Known Issues

Machine's clock not correct after restoring a checkpoint
If you have a big time mismatch after restoring your virtual machine (the checkpointed state of the machine also preserves the time!) and you are having issues, e.g. with Kerberos (kinit), you can speed up time synchronization and force an update of the machine clock manually, e.g.:
# /etc/init.d/ntpd stop
# ntpdate ip-time-1.cern.ch
# /etc/init.d/ntpd start

Template details

Some generic information about the machine templates (the images used to create the machine) follows. Please note that more detailed information is contained in the changelog.

Template OS (VMM / Network DB) (1) | Template name in CVI (2) | Description
SCIENTIFIC LINUX 5 / x86_64 CVIVM | sl5-64-base | general-purpose, generic (VNODE-like) SL5-64 OS template, more info below
SCIENTIFIC LINUX 5 / x86_64 CVIVM RC | sl5-64-base-rc | test version of SCIENTIFIC LINUX 5 / x86_64 CVIVM
SCIENTIFIC LINUX 6 / x86_64 CVIVM | sl6-64-base | general-purpose, generic (VNODE-like) SL6-64 OS template, more info below
SCIENTIFIC LINUX 6 / X86_64 CVIVM OPT | sl6-64-opt | as sl6-64-base, optimized for better performance under Hyper-V, more info below
Debian GNU/Linux 6 / amd64 CVIVM | debian6-64-base | general-purpose, generic (VNODE-like) Debian 6 (squeeze) / amd64 OS template
Debian GNU/Linux 6 / amd64 CVIVM OPT | debian6-64-opt | as debian6-64-base, optimized for better performance under Hyper-V, more info below
ETICS WN / SL5 X86_64 EU | WN-sl5-epel-users | the same image as used by the ETICS build machines, but with Condor disabled (so that these machines don't interfere with the ETICS infrastructure)
CVI VM SOLARIS / OSOL0906 BASE | osol0906-base | experimental version of the OpenSolaris template

(1) - the OS name as shown in the request form / Network DB when creating a machine; (2) - the name of the underlying CVI template.

Please note that we only describe templates provided/maintained by IT/GT. The other available VM templates are provided by the CVI team as part of the Central Service (and we cannot make any changes there!).

"Base" vs. "Opt" OS templates

In several cases there are 2 types of templates: a base template and an optimized version. The differences are explained in detail in the descriptions of the templates, but the most important points are:
  • base templates - are kept as generic and unmodified as possible (similar to a generic system installed from the original distribution)
  • opt (-imized) templates - are configured for better performance when running on Hyper-V hypervisors
Optimization in the second case means changes at the kernel level (additional drivers or even a different kernel version) plus slight configuration changes to the system itself (e.g. fstab entries pointing at the better-performing block devices).

SCIENTIFIC LINUX / 5 x86_64 CVIVM - base SL5/64 VM template

It is a generic, fairly minimal Scientific Linux 5.x (x86_64) installation, with the following modifications:
  • CVI Hyper-V addons installed (kernel modules and some modifications, mostly for better disk and network performance; see also the FAQ)
  • the system has a custom /etc/motd and .forward, generated during startup (using the script /etc/rc.local and a few other custom scripts)
  • repositories:
    • default SL repositories are modified: local CERN mirrors are default location
    • CERN-only and CERN-extras (SLC repos) are added (disabled by default)
    • dag and epel repos are added
  • authentication - ask Tomasz if you need access

Filesystem layout

The filesystem layout is designed to cover as many of our use cases as possible.
  • Devices
    • /dev/hda 5GB
      • /dev/hda1 - / (root filesystem), 5 GB
    • /dev/sda 20GB
      • /dev/sda1 - /usr, 15 GB
      • /dev/sda2 - /var, 5 GB
    • /dev/sdb 4GB
      • /dev/sdb1 - system swap (4GB)
    • /dev/sdc - 40GB
      • /dev/sdc1 - LVM, volume group extravg, allocated 7GB, free 33GB; logical volumes defined and used:
        • tmplv - /tmp, 2GB
        • scratchlv - /scratch, 5GB

Hyper-V uses disk images, so not all of the space specified above is allocated on the hypervisor. Disk space is allocated progressively as the VM writes data to disk (the disk images grow).

The SCSI disk /dev/sdc with LVM is intended to be modified by the user for special use cases (when an extra device or more space is needed). Both logical volumes there (tmplv, scratchlv) can easily be resized online, without unmounting, using the lvextend and xfs_growfs commands (they use the xfs filesystem), allocating space available in the volume group. Alternatively, additional logical volume(s) can be created (lvcreate), formatted and mounted.
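For example (a sketch based on the layout above; the extra size and the volume name "datalv" are illustrative):

# lvextend -L +10G /dev/extravg/scratchlv    # grow the scratch logical volume by 10 GB
# xfs_growfs /scratch                        # grow the mounted xfs filesystem online
# lvcreate -L 5G -n datalv extravg           # or create a new logical volume instead
# mkfs.xfs /dev/extravg/datalv
# mkdir /data && mount /dev/extravg/datalv /data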

Performance note: SCSI devices on Hyper-V have significantly better performance (and lower CPU overhead) than IDE devices. Keep this in mind when choosing a temporary working area, e.g. for tasks like untarring and compiling.
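For example (the tarball name is illustrative), working on the SCSI-backed /scratch area rather than the IDE-backed root filesystem:

$ cd /scratch
$ tar xzf ~/mysource.tar.gz    # untar and build here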

IMPORTANT USAGE NOTE: the filesystem layout is designed to be flexible for many use cases, so the devices created are much bigger(!) than needed in most cases. Block devices in Hyper-V only consume the hypervisor's disk space when the space is actually used (data is written to the disk image). Please keep this in mind when allocating space, and use LVM and (especially!) its additional space only when it is really needed. Remember that the hypervisor's allocated disk space is an important factor limiting the number of virtual machines that can be deployed(!).

Security notes

  • Virtual machines have an active local firewall (iptables) that by default only allows ssh connections. Feel free to modify it (see the example after this list). If you need to set up a static service with a more secure configuration, the central system can be used for that (the same as before for physical machines). Ask Tomasz if you need it.
  • As on VNODE, you can limit access to your virtual machine by editing .ssh/authorized_keys.
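For example, to open an extra port in the local firewall (a sketch; port 8443 is only an illustrative choice):

# iptables -I INPUT -p tcp --dport 8443 -j ACCEPT
# service iptables save    # persist the rule across reboots (SL5/SL6)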

SCIENTIFIC LINUX / 6 x86_64 CVIVM - base SL6/64 VM template

It is a generic, fairly minimal Scientific Linux 6.x (x86_64) installation, with the following modifications:
  • the system has a custom /etc/motd and .forward, generated during startup (using the script /etc/rc.local and a few other custom scripts)
  • repositories:
    • default SL repositories are modified: local CERN mirrors are default location
    • CERN-only and CERN-extras (SLC repos) are added (disabled by default)
    • epel repo is added
  • authentication - ask Tomasz if you need access

Filesystem layout

The filesystem layout is designed to cover as many of our use cases as possible.
  • IDE Devices (new drivers - devices sd* !)
    • /dev/sda 5GB
      • /dev/sda1 - / (root filesystem), 5 GB
    • /dev/sdb 4GB
      • /dev/sdb1 - system swap (4GB)
    • /dev/sdc 20GB
      • /dev/sdc1 - /usr, 15 GB
      • /dev/sdc2 - /var, 5 GB
    • /dev/sdd - 40GB
      • LVM, volume group extravg, allocated 7GB, free 33GB; logical volumes defined and used:
        • tmplv - /tmp, 2GB
        • scratchlv - /scratch, 5GB

Hyper-V uses disk images, so not all of the space specified above is allocated on the hypervisor. Disk space is allocated progressively as the VM writes data to disk (the disk images grow).

The SCSI disk /dev/sdd with LVM is intended to be modified by the user for special use cases (when an extra device or more space is needed). Both logical volumes there (tmplv, scratchlv) can easily be resized online, without unmounting, using the lvextend and xfs_growfs commands (they use the xfs filesystem), allocating space available in the volume group. Alternatively, additional logical volume(s) can be created (lvcreate), formatted and mounted (see the example in the SL5 section above).

IMPORTANT USAGE NOTE: the filesystem layout is designed to be flexible for many use cases, so the devices created are much bigger(!) than needed in most cases. Block devices in Hyper-V only consume the hypervisor's disk space when the space is actually used (data is written to the disk image). Please keep this in mind when allocating space, and use LVM and (especially!) its additional space only when it is really needed. Remember that the hypervisor's allocated disk space is an important factor limiting the number of virtual machines that can be deployed(!).

Security notes

  • Virtual machines have an active local firewall (iptables) that by default only allows ssh connections. Feel free to modify it (see the example in the SL5 security notes above). If you need to set up a static service with a more secure configuration, the central system can be used for that (the same as before for physical machines). Ask Tomasz if you need it.

Known issues

Emacs problems with fonts
If you are getting errors like
(emacs:39217): Pango-WARNING **: failed to choose a font, expect ugly output. engine-type='PangoRenderFc', script='latin'
and/or have problems with Emacs fonts (like unreadable menus), you may need to install dejavu-lgc-sans-fonts (this is a known CentOS6 issue).
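For example:

# yum install dejavu-lgc-sans-fonts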

SCIENTIFIC LINUX 6 / X86_64 CVIVM OPT - SL 6 / 64 VM template, optimized (with Hyper-V device drivers)

This template provides the same SL 6/x86_64 system as the template above, with the following differences:
  • the system additionally has Hyper-V drivers installed from the following packages: kmod-microsoft-hyper-v-rhel6-60.1.x86_64.rpm, microsoft-hyper-v-rhel6-60.1.x86_64.rpm
  • the filesystem layout is a bit different with respect to the devices used (see the filesystem layout details below).

Filesystem layout

The filesystem layout is very similar to that of the base SL 6 template. The main difference is the presence of 3 new SCSI block devices (sde, sdf, sdg) which functionally replace the 3 IDE devices (sdb, sdc, sdd) for better performance. The disk image sizes and filesystem mapping (disk space, filesystems used) remain the same as for the base SL 6 template (details below).

  • IDE Devices (new drivers - devices sd* !)
    • /dev/sda 5GB
      • /dev/sda1 - / (root filesystem), 5 GB

  • SCSI Devices
    • /dev/sde 4GB
      • /dev/sde1 - system swap (4GB)
    • /dev/sdf 20GB
      • /dev/sdf1 - /usr, 15 GB
      • /dev/sdf2 - /var, 5 GB
    • /dev/sdg - 40GB
      • LVM, volume group extravg, allocated 7GB, free 33GB; logical volumes defined and used:
        • tmplv - /tmp, 2GB
        • scratchlv - /scratch, 5GB

  • Unused IDE devices (with the new drivers they appear as sd* devices!) - present in the system but not to be used! (They have been replaced by the optimized SCSI devices; they are completely empty and their disk image files do not use any of the hypervisor's disk space.)
    • /dev/sdb 4GB
      • empty, should not be used!
    • /dev/sdc 20GB
      • empty, should not be used!
    • /dev/sdd 40GB
      • empty, should not be used!

See also information about SL 6 base template and performance notes.

DEBIAN LINUX 6 / amd64 CVIVM - base Debian 6 / 64 VM template

It is a generic, fairly minimal Debian 6 (amd64) installation, with the following modifications:
  • the system has a custom /etc/motd and .forward, generated during startup (using the script /etc/rc.local and a few other custom scripts)
  • authentication - ask Tomasz if you need access

Filesystem layout

The filesystem layout is designed to cover as many of our use cases as possible.
  • IDE Devices (new drivers - devices sd* !)
    • /dev/sda 5GB
      • /dev/sda1 - / (root filesystem), 5 GB
    • /dev/sdb 4GB
      • /dev/sdb1 - system swap (4GB)
    • /dev/sdc 20GB
      • /dev/sdc1 - /usr, 15 GB
      • /dev/sdc2 - /var, 5 GB
    • /dev/sdd - 40GB
      • LVM, volume group extravg, allocated 7GB, free 33GB; logical volumes defined and used:
        • tmplv - /tmp, 2GB
        • scratchlv - /scratch, 5GB

Hyper-V uses disk images, so not all of the space specified above is allocated on the hypervisor. Disk space is allocated progressively as the VM writes data to disk (the disk images grow).

The disk /dev/sdd with LVM is intended to be modified by the user for special use cases (when an extra device or more space is needed). Both logical volumes there (tmplv, scratchlv) can easily be resized online, without unmounting, using the lvextend and xfs_growfs commands (they use the xfs filesystem), allocating space available in the volume group. Alternatively, additional logical volume(s) can be created (lvcreate), formatted and mounted (see the example in the SL5 section above).

IMPORTANT USAGE NOTE: the filesystem layout is designed to be flexible for many use cases, so the devices created are much bigger(!) than needed in most cases. Block devices in Hyper-V only consume the hypervisor's disk space when the space is actually used (data is written to the disk image). Please keep this in mind when allocating space, and use LVM and (especially!) its additional space only when it is really needed. Remember that the hypervisor's allocated disk space is an important factor limiting the number of virtual machines that can be deployed(!).

System boot sequence

Debian 6 by default uses a dependency-based boot sequence. Briefly: tools like insserv and chkconfig should be used for configuration (manually created links in /etc/rcN.d/ are ignored!). Note that additional init scripts should be LSB init scripts.
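For example (a sketch; "myservice" stands for an LSB init script you have placed in /etc/init.d/):

# insserv myservice      # add it to the dependency-based boot sequence
# insserv -r myservice   # remove it again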

DEBIAN LINUX 6 / amd64 CVIVM OPT - Debian 6 / 64 VM template, optimized (with Hyper-V device drivers)

This template provides the same Debian 6/amd64 system as the template above, with the following differences:
  • the system has a dedicated kernel, configured and compiled for optimal use under Hyper-V hypervisors
    • kernel version: check with $ uname -a (currently 2.6.39.1, a vanilla kernel with no additional patches!); the kernel is newer than the standard Debian 6 kernels (which do not contain Hyper-V drivers!)
  • OpenAFS - the packages openafs-client, openafs-krb5 and openafs-modules-dkms are from Debian 'testing' (currently version 1.6.0); the OpenAFS kernel modules from Debian stable do not compile with newer kernels (probably due to some kernel interface changes)
  • the filesystem layout is a bit different with respect to the devices used (see the filesystem layout details below).

Filesystem layout

The filesystem layout is very similar to that of the base Debian 6 template. The main difference is the presence of 3 new SCSI block devices (sde, sdf, sdg) which functionally replace the 3 IDE devices (sdb, sdc, sdd) for better performance. The disk image sizes and filesystem mapping (disk space, filesystems used) remain the same as for the base Debian 6 template (details below).

  • IDE Devices (new drivers - devices sd* !)
    • /dev/sda 5GB
      • /dev/sda1 - / (root filesystem), 5 GB

  • SCSI Devices
    • /dev/sde 4GB
      • /dev/sde1 - system swap (4GB)
    • /dev/sdf 20GB
      • /dev/sdf1 - /usr, 15 GB
      • /dev/sdf2 - /var, 5 GB
    • /dev/sdg - 40GB
      • LVM, volume group extravg, allocated 7GB, free 33GB; logical volumes defined and used:
        • tmplv - /tmp, 2GB
        • scratchlv - /scratch, 5GB

  • Unused IDE devices (with the new drivers they appear as sd* devices!) - present in the system but not to be used! (They have been replaced by the optimized SCSI devices; they are completely empty and their disk image files do not use any of the hypervisor's disk space.)
    • /dev/sdb 4GB
      • empty, should not be used!
    • /dev/sdc 20GB
      • empty, should not be used!
    • /dev/sdd 40GB
      • empty, should not be used!

See also information about Debian 6 base template and performance notes.

ETICS WN / SL5 X86_64 EU

This is the same image as used by the ETICS build machines, but Condor has been disabled (so that these machines don't interfere with the ETICS infrastructure).

SLC (Scientific Linux CERN) templates

These templates are provided by the CVI team as part of the Central Service. Please note that these templates have:
  • a number of disadvantages:
    • they are relatively large and include a lot of software, such as the X Window System, office software (OpenOffice), internet browsers, etc.
    • they may not use memory optimally - e.g. a running X server (not needed in most cases on a VM...) consumes a lot of memory; you can stop the X server with $ telinit 3, or by editing /etc/inittab, setting the default runlevel (the line with id:*number*:initdefault:) to 3 (instead of 5) and restarting the system
    • they have a fixed, not very flexible filesystem layout
    • support from our side is limited - the images are maintained by the CVI team and we cannot change them
  • some advantages:
    • they are better integrated with the CERN infrastructure than our SL templates:
      • Kerberos authentication to your own accounts, with home directories on AFS (like lxplus)
      • easier access to CERN services (like printers, phonebook, etc.)
(For certain use cases the disadvantages may turn into advantages, and vice versa - so think of this simply as a list of important differences compared with the IT/GT SL/Debian/... templates.)

VM Console access

VMM web interface

Selecting a machine gives access to a "Console" button. Unfortunately it is only useful for Windows machines...

Windows SCVMM application

There is a Windows terminal server (cerntsvmm.cern.ch) with the SCVMM (System Center Virtual Machine Manager) application installed. This application allows direct access to the console of your virtual machine.

The terminal server can be accessed with xfreerdp or rdesktop, e.g.: $ xfreerdp -g 1280x970 cerntsvmm.cern.ch -d CERN -u username (in case of login problems with rdesktop, try $ rm ~/.rdesktop/licence.[hostname] and retry).

The SCVMM application may ask you to specify the server to connect to (if you do it manually via File -> "Open new connection" you have to provide it). Specify: cernvmm:8100

IMPORTANT NOTICE: SCVMM is so far the most advanced tool for managing the machines; it allows starting/stopping VMs, reconfiguring all VM parameters, etc. It can be used for many administrative tasks, but it should not be used for them unless you really know what you are doing. E.g. even though it will let you change the machine name etc., no changes(!) will be made in the Network Database(!). Please use the web interface for this (unless you are really sure about what you are doing!).

Command-line client

For GUI-haters there is a command-line client - you can download and try the current version (sometimes there is also a newer, less tested version available). For usage please consult README.txt first, as well as the program help:
  • $ cvicli -h or
  • $ cvicli COMMAND -h
and if something is not clear - please let us know, we will extend this section!

Installation notes:

  • you may need to install python-suds, python-httplib2 and python-kerberos (other modules should be available in a generic Python installation); see the example below
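For example, on the SL templates (a sketch; package availability may depend on the configured repositories, e.g. EPEL):

# yum install python-suds python-httplib2 python-kerberos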

Troubleshooting:

  • PKI (user certificate) auth. is not working
    • it is indeed not working... the CVI web service simply does not allow it yet (please ignore this option of cvicli; it will be removed / marked as "not available" in the next version). Note that you can use Kerberos authentication instead (the --krb option).

Contact information

E-mail: it-gt-cvi-admin@cern.ch

-- AndrewElwell, TomaszWolak - 27-Jan-2011
