Consolidation of IT-GT Virtualization Services

LINKS

CVI Links:

GT Links:

Other Links:

How to connect to a machine where the System Center - Virtual Machine Manager is already installed:

  • rdesktop to cerntsvmm
  • open the manager application and select cernvmm01:8100
  • in case of login problems:
    %> rm ~/.rdesktop/licence.[hostname]
  • Retry.
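
A minimal session could look like this (a sketch: the -u/-d flags and the CERN domain are assumptions, adjust to your account):

    %> rdesktop -u $USER -d CERN cerntsvmm
    %> rm ~/.rdesktop/licence.cerntsvmm    (only if the licence handshake fails; then retry)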

WORKLOG

2011/04/18 - New hypervisors with RAID0 for ETICS

10 new hypervisors (lxbsp0731, lxbsp0732, ..., lxbsp0740) with RAID0 have been activated and assigned to ETICS.

2011/03/23 - User access issues corrected

2011/03/23 - Library server backups

2011/03/07 - new IT-GT library server

2011/03/07 - Notes from image and machine template creation

(The information below, as well as information extracted from the rest of this page, should go to a dedicated IT/GT CVI administration/HOWTO page.)

Creating image from ISO
(moved here)

SL5 / x86_64 image creation
  • created with SL5/x86_64 ISO images
  • devices
    • hda: 5G (changed with the M$ Windows VMM; via the web interface the minimum disk size is 20G!), hda1 (5G): root filesystem
    • hdb: 4G, hdb1: swap partition (a disk image holding swap should always be kept separate for VMs: no backup needed, etc.; see the fstab sketch after this list)
  • installed minimal OS (for a generic image)
    • stripped Base installation
    • added AFS / kerberos, manually configured for CERN
  • installed CVI add-ons, for details see CERN FAQ

    • rpm taken from SLC5 cern extras repository (/etc/yum.repos.d/slc5-cern-extras.repo, http://linuxsoft.cern.ch/cern/slc5X/$basearch/yum/extras/)
    • note that the kernel in SL5/64 is slightly different from the one in SLC5/64, but the RPM seems to do the job properly
  • added our internal.repo, slc5-cernonly.repo, lcg-CA.repo; EPEL disabled, dag enabled
  • currently VNode-like authentication (.ssh/authorized_keys with permitted users; maybe to be changed to Kerberos, and/or access granted only to the Responsible/Owner)
  • root e-mail redirection to the User and Responsible specified in LanDB (LanDB currently allows any registered machine to anonymously pull information about itself); custom scripts in /usr/local/sbin
  • kept the default SL firewall configuration allowing only ssh access (to be kept that way by default, or managed with our firewall management (by default?))
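
For reference, the disk layout above corresponds roughly to the following /etc/fstab entries (a minimal sketch; the ext3 filesystem type is an assumption based on a stock SL5 install):

    /dev/hda1   /      ext3   defaults   1 1
    /dev/hdb1   swap   swap   defaults   0 0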

SL5 template creation

  • use the CVI SOAP method TemplateCreateRequest (an example call is sketched after this list)
    • notes about the arguments of the SOAP method:
      • VMName: name of the virtual machine from which the template will be created(!!!)
      • TemplatePath: path on the library server in the VMMs directory;
        • available directories: Generic, ... ? (is there an Etics one?)
        • it has to be created beforehand (and be visible on the Library Server, which is not the same thing!)
        • the directory is on the central Library Server (not the IT/GT one)!!!, so it requires permissions there (ask Jose for access)
      • Owner: it-gt-admin
      • HostGroups: IT-GT, IT-GT/Etics, ...; the template will probably be accessible only within the specified group (to check!)
  • tested creating a new machine from a template, notes:
    • the web interface allows creating a machine with larger(!) disk space (hda) than the template machine (great!)
      • successfully created a machine with a 40G hda, while the machine used to create the template had 5G(!)
      • the root filesystem itself has to be extended, or a new fs can be created and mounted (in some cases (DPM) the second solution seems better); to discuss and decide how to make this most usable and flexible for users (add an automated resize or leave it to the user? e.g. a post-install script to be run by the user?)
    • issue with the kernel - every two boots an error appears (compare dmesg from the original machine and from the one created from the template); it seems harmless but this should be confirmed...
    • TO CHECK: whether the available memory changes with the selections made in the web service or stays the same as in the template
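
A template-creation call could follow the same HTTP GET pattern as the checkpoint calls further down this page (a sketch only: the parameter names come from the notes above, but the VM name and values are placeholders and should be verified against the service's WSDL):

# wget -qO- --no-check-certificate --http-user=eticsbld --http-password=******** "https://vmmport01.cern.ch/vmmgtsvc/service.asmx/TemplateCreateRequest?VMName=sl5template01&TemplatePath=Generic&Owner=it-gt-admin&HostGroups=IT-GT"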

28/01/2011 - Hardware requested

20 HV https://remedy01.cern.ch/cgi-bin/consult.cgi?caseid=CT0000000741483&email=jan.van.eldik@cern.ch

1 Disk Server https://remedy01.cern.ch/cgi-bin/consult.cgi?caseid=CT0000000740565&email=alberto.aimar@cern.ch

20/01/2011 - Meeting with Jan and Jose

  • The Owner is used to grant permissions, so both Owner and User must be set to the final VM user
  • LanDB has a method myHostInfo which can be called without authentication to get information
  • we might get access to the HVs to do disk operations on the template files and also to manage scheduled software updates
  • requested 20 new HVs
  • a disk server (> 1 TB) is needed for the virtual library

18/01/2011 - Memory and CPU allocation in CVI

Participants: Andres Abad Rodriguez, Jose Castro Leon
  • Memory: it is not possible to over-allocate.
  • CPU: we can over-allocate. The recommended maximum is 8 virtual CPUs per physical (or "logical", as they call them) CPU. This means that with our hypervisors (8 cores each) we can run 64 VMs at the same time (8 cores x 8 vCPUs). Of course the bottleneck is the memory, so in this case those 64 VMs would each have at most 256 MB of RAM (16 GB / 64).

14/01/2011

  • New host group created IT-GT\users - for virtualization on demand (VNode-like)

11/01/2011

  • 7 new hypervisors have been assigned to IT-GT
    • 2 will be allocated to ETICS
    • 2 will be allocated to testbeds
    • The remaining 3 will be used as VNode replacement and to provide service to TOM

14/12/2010

Chat with Jose Castro.
- Library server: unfortunately it is not possible to create a library server with just a Samba share; some M$ management software is required, so we would need to have our own Windows Server (possibly as a VM on CVI) - to discuss and test

PXE installation
- installed an SL5-64 (the only SL (not -C) Linux available in aims...) virtual machine on CVI; PXE seems to work properly (the same way as for physical machines)

13/12/2010 - Meeting with Jose Castro.

Participants: Jose Castro Leon, Alberto Resco Perez, Pablo Guerrero Rosel, Andres Abad Rodriguez
  • Jose informed us about dynamic multicore in the VMs. We did not know that we can, for instance, assign 4 cores to each VM and let the hypervisor split the real processors among them. In our case, with a configuration of 6 machines per hypervisor (8 real cores), we get 1.33 real cores per VM in the worst case, so we start from performance equal to or a bit better than assigning one core per VM, and it can improve depending on the load of the other machines.
  • This is internal behavior, transparent to the VM OS, so it will work on Linux platforms. CVI manages the machines as processes, so it assigns cores to a VM as it would to a process.
  • Pablo will test this by repeating Lorenzo's tests but with 4-core machines (instead of one core).

13/12/2010 - Chat with PES Ewan Roche

  • The python tool they developed is at: /afs/cern.ch/user/v/vmmaster/bin/vmtool
  • At present the hostgroup and template are hardcoded (IT-PES-PS Service and linuxPXE) and the script expects interactive access, so it prompts for a username/password. This is pretty easy to change, as would be making the Quattor part optional.
  • Their wiki is at https://twiki.cern.ch/twiki/bin/view/PESgroup/VirtualMachineConsolidation

09/12/2010 - CVI SOAP service checkpointing test

  • wget and the GET interface of the CVI SOAP service were used
  • eticsbld user added to it-gt-admin
  • HTTP basic authentication over HTTPS was used
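
The four calls below exercise the GET interface end-to-end: list the checkpoints of the test VM (initially an empty ArrayOfString), create a checkpoint named test1, list again (test1 now appears), and finally restore it.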

# wget -qO- --no-check-certificate --http-user=eticsbld --http-password=******** https://vmmport01.cern.ch/vmmgtsvc/service.asmx/VMCheckPointGetAll?VMName=ldinitest

<?xml version="1.0" encoding="utf-8"?>
<ArrayOfString xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xsi:nil="true" xmlns="VmMgtSvc" />


# wget -qO- --no-check-certificate --http-user=eticsbld --http-password=******** https://vmmport01.cern.ch/vmmgtsvc/service.asmx/VMCheckPointCreate?VMName=ldinitest\&CheckPointName=test1\&Description=ThisIsADescription

<?xml version="1.0" encoding="utf-8"?>
<string xmlns="VmMgtSvc" />


# wget -qO- --no-check-certificate --http-user=eticsbld --http-password=******** https://vmmport01.cern.ch/vmmgtsvc/service.asmx/VMCheckPointGetAll?VMName=ldinitest

<?xml version="1.0" encoding="utf-8"?>
<ArrayOfString xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="VmMgtSvc">
  <string>test1</string>
</ArrayOfString>


# wget -qO- --no-check-certificate --http-user=eticsbld --http-password=******** https://vmmport01.cern.ch/vmmgtsvc/service.asmx/VMCheckPointRestore?VMName=ldinitest\&CheckPointName=test1

<?xml version="1.0" encoding="utf-8"?>

08/12/2010 - CVI Meeting

Participants: Jan Van Eldik, Jose Castro Leon, Lorenzo Dini, Alberto Resco Perez, Pablo Guerrero Rosel, Tomasz Wolak
  • Image creation / management
    • a library server is needed; one will be created for IT/GT and managed by us, which will allow us to use any ISO image uploaded there for OS installation
    • aims / PXE, as provided for the CERN CC, can also be used to install virtual machines (to test!)
      • have to create a new, clean Linux/PXE machine
      • use aims to configure it for the specified machine (image selection, PXE, kickstart - as usual for physical machines)
  • OSes other than Linux/MS Windows (BSD, Solaris, ...)
    • can be used, but most likely with a performance hit (no additional support in Hyper-V)

06/12/2010 - New performance tests on VM influence on the same HV with the new hypervisors on lxvmpool034

Time to execute N builds in parallel on N VMs from the same HV.

  • Built configuration etics_R_3_2_2_1
  • Platform: sl5_x86_64_gcc412
  • VMs: 1core 2GB RAM (unless differently specified)

Here is the script executed for the tests:

curl "http://eticssoft.web.cern.ch/eticssoft/repository/etics-client-setup.py" -o etics-client-setup
wget "http://eticssoft.web.cern.ch/eticssoft/repository/etics-client-setup.py" -O etics-client-setup
time python etics-client-setup &> etics-install.out
export ETICS_HOME=$PWD/etics
export PATH=$ETICS_HOME/bin:$PATH
etics-workspace-setup
etics-get-project "org.etics"
time etics-checkout -p default.profile="ipv6,wsi" --continueonerror --config "etics_R_3_2_2_1" --runtimedeps --verbose --noask "org.etics" &> etics-checkout.out
time etics-build -p default.profile="ipv6,wsi" --config "etics_R_3_2_2_1" --continueonerror --verbose "org.etics" &> etics-build.out

Within a cell, the timings of the individual parallel builds are separated by " - ".

| operation | 1 HW (4core-8GB) | VMWARE 1VM | CVI 1VM | CVI 2VMs | CVI 3VMs | CVI 4VMs | VMWARE 4VMs | CVI 5VMs | CVI 6VMs | CVI 7VMs |
| client install | 1m14s | 2m57s | 2m51s | 2m53s - 2m21s | 2m55s - 3m22s - 2m48s | 3m34s - 3m38s - 3m42s - 2m58s | 6m9s - 8m43s - 5m25s - 5m29s | 3m27s - 3m46s - 3m47s - 3m50s | 3m41s - 3m30s - 3m42s - 3m32s - 3m25s - 3m32s | 4m12s - 4m21s - 4m34s - 4m35s - 4m40s - 4m41s - 4m43s |
| checkout org.etics | 10m25s | 0 | 16m57s | 18m50s - 18m26s | 19m48s - 20m43s - 19m15s | 20m56s - 20m49s - 20m24s - 21m16s | 43m52s - 39m22s - 64m25s - 70m17s | 23m47s - 23m58s - 24m36s - 24m49s - 25m13s | 28m30s - 30m - 27m54s - 29m34s - 31m23s - 27m20s | 32m26s - 33m52s - 34m33s - 35m5s - 35m41s - 35m50s - 35m52s |
| build org.etics | 20m43s | 50m47s | 70m58s | 73m8s - 74m24s | 55m24s - 69m15s - 71m26s | 69m43s - 74m - 62m26s - 80m45s | 61m45 - 63m22s - 58m33s - 72m16s | 69m30s - 75m37s - 77m36s - 78m50s - 71m55s | 80m - 79m37s - 81m57s - 82m44s - 79m34s - 57m23s | 81m21s - 76m23s - 81m45s - 80m6s - 76m23s - 84m39s - 57m18s |

03/12/2010 - All VMs migrated to 5 new hypervisors

  • All 5 hypervisors have been assigned to the GT group without billing
    • Hostnames: lxvmpool030-034
  • All existing VMs have been migrated from the old evaluation HV to the new HVs
  • The new HVs are CC blades with 8 cores and 16 GB RAM, single disk

1/12/2010 - ETICS builds performance investigation

Here is a chart (see the chart attachments at the bottom of this page) comparing CVI performance on ETICS builds with VMWare.

On the Y axis you find the build time in minutes. On the X axis you find each build.

To understand the chart, compare the blue dots with the red ones on the same vertical line. Higher is slower.

You can see that the time varies a lot; it probably depends on how many builds are currently executing on other VMs within the same HV.

Overall CVI is performing OK: within the range of VMWare and not much affected by other VMs, as the timings are reasonably stable.

Queries executed to gather data:

-- pair each CVI build with the matching VMWare build and compare durations
-- (note: despite the *_seconds aliases, the values are minutes, since TIMESTAMPDIFF uses MINUTE)
select t1.runid, t2.runid,
       TIMESTAMPDIFF(MINUTE, t1.start, t1.finish) as cvi_seconds,
       TIMESTAMPDIFF(MINUTE, t2.start, t2.finish) as vmware_seconds,
       TIMESTAMPDIFF(MINUTE, t2.start, t2.finish) / TIMESTAMPDIFF(MINUTE, t1.start, t1.finish) * 100 as percentage
from Task as t1, Run as r1, Task as t2, Run as r2
where t1.runid = r1.runid and t2.runid = r2.runid
  and t1.name = 'remote_task' and t2.name = 'remote_task'
  and t1.host like 'etics-cvi-test%.cern.ch' and t2.host not like 'etics-cvi-test%.cern.ch'
  and r1.project = r2.project and r1.component = r2.component and r1.description = r2.description
  and t1.platform = t2.platform and t1.result = t2.result;

-- the same query wrapped in a view for reuse
create view cvi_performance as
select t1.runid as runid1, t2.runid as runid2,
       TIMESTAMPDIFF(MINUTE, t1.start, t1.finish) as cvi_seconds,
       TIMESTAMPDIFF(MINUTE, t2.start, t2.finish) as vmware_seconds,
       TIMESTAMPDIFF(MINUTE, t2.start, t2.finish) / TIMESTAMPDIFF(MINUTE, t1.start, t1.finish) * 100 as percentage
from Task as t1, Run as r1, Task as t2, Run as r2
where t1.runid = r1.runid and t2.runid = r2.runid
  and t1.name = 'remote_task' and t2.name = 'remote_task'
  and t1.host like 'etics-cvi-test%.cern.ch' and t2.host not like 'etics-cvi-test%.cern.ch'
  and r1.project = r2.project and r1.component = r2.component and r1.description = r2.description
  and t1.platform = t2.platform and t1.result = t2.result;
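
With the view in place, the comparison can be summarized in one line (a hypothetical usage example; the column names come from the view definition above):

select count(*) as compared_builds, avg(percentage) as avg_vmware_over_cvi_pct from cvi_performance;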

1/12/2010 - Additional questions answered by CVI

  • Are any SLAs defined for the CVI infrastructure, WA or WS?
    • We tend to discuss directly with the major customer groups, and agree Service Levels. You would be such a customer group.
  • Who are your other major users? We would like to contact them to share ideas and to avoid work duplication for the CLIs.
    • IT/PES service managers (Ewan Roche), who write python scripts against the SOAP interface, are probably most interesting for you.
    • BE/CO have started to write a Java application as well.
  • Is any documentation and/or examples available for the APIs?
    • Not much. Anything in particular you are looking for?
  • What is the average size of a paused VM in the disk? What about the checkpoints?
    • I believe this is simply the size of the VHD, which for your VMs seems to be 15 - 20 GB
  • Are paused VMs only taking disk space, or do they reduce HV performance?
    • I am not aware of performance penalties.
    • But I have to add that >98% of the VMs are continuously running...
  • How difficult is it to move a paused VM to another HV? Can we do it using the HyperV Manager Windows application? Via the WS/WA?
    • This should be a simple 'Migrate' operation, which could be executed via SOAP or via the web interface.
  • What are the problems in giving us direct upload access to the image repository? Will we always have to go via you to add/remove images?
    • There are no such problems... I believe that Jose sent instructions on how to use the new TemplateCreateRequest() SOAP method yesterday

1/12/2010 - Checkpoint methods available in the CVI SOAP WS

CVI has implemented SOAP methods for checkpointing, which you can find at https://vmm.cern.ch/vmmgtsvc/service.asmx :

         public string VMCheckPointCreate(string VMName, string CheckPointName, string Description)
         public bool VMCheckPointExists(string VMName, string CheckPointName)
         public string[] VMCheckPointGetAll(string VMName)
         public string VMCheckPointDeleteAll(string VMName)
         public string VMCheckPointDelete(string VMName, string CheckPointName)
         public string VMCheckPointRestore(string VMName, string CheckPointName)
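
These can be called through the same HTTP GET interface used in the 09/12/2010 tests above, for example (a sketch; the credentials and VM name are placeholders):

# wget -qO- --no-check-certificate --http-user=USER --http-password=PASS "https://vmm.cern.ch/vmmgtsvc/service.asmx/VMCheckPointExists?VMName=myvm&CheckPointName=test1"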

4-31/11/2010 - Testing 5 VMs etics-cvi-test[12345] in ETICS production

  • New IT-GT group and ETICS subgroup created in CVI. Manageable via e-groups.
  • New hypervisor (cernvs64) assigned to ETICS for testing (12 cores, 24 GB RAM, RAID 10)
  • ETICS images SL5_64, SLC4_64, SLC4_32 converted to VHD and added to CVI

4/11/2010 - New CVI web-application and web-service in place for testing:

WA: https://vmm.cern.ch/vmm/default.aspx
WS: https://vmmport01.cern.ch/vmmgtsvc/service.asmx

How to connect to a machine where the HyperV Manager is already installed:

  • rdesktop to cerntsvmm
  • open the manager application and select cernvmm01:8100
  • in case of login problems:
    %> rm ~/.rdesktop/licence.[hostname]
  • Retry.

10/09/2010 - Testing IT-GT-DM Builds

Build times (hh:mm:ss):

| Component | VMWare Nodes | Hardware Node | CVI 1GB | CVI 2GB | CVI 4GB |
| org.glite.fts | 03:07:44 | 01:16:17 | 03:37:47 | 03:49:09 | 03:52:51 |
| org.glite.data | 01:48:22 | 00:42:10 | 0 | 0 | 0 |

10/09/2010 - CVI Meeting

Participants: Lorenzo Dini, Jan Van Eldik

STATUS
  • Tested VMs on SLC4 x86_64 and Win7. VMs behave very well
  • Still performance problems with WA and WS
  • New person working on the CVI infrastructure: jose.castro.leon@cern.ch
  • A new version of the CVI WS is under development; it will be available in the first days of October
  • Alberto Resco and Jose Castro converted VMDK -> VHD for SL5_64, SLC4_32 and SLC4_64.

Next Actions
  • Jan notifies Lorenzo once the new WS is in place to be tried and tested
  • Alberto and Jose will test the ETICS provided images from CVI webpage.
Answers to previous GT requests (see the previous meeting for the full question list)
BLOCKING
  • Custom images availability
    • Alberto Resco and Jose Castro can start with the VMDK -> VHD exercise next week
    • sl5_ia32, sl5_x86_64, deb5_ia32, deb5_x86_64, slc4_ia32 and slc4_x86_64 are the platforms we need for production use
    • A minimal SLC image can be investigated (minimal set of packages to further extend during builds with YUM)

IMPORTANT

  • Performance problems
    • The new WS from the beginning of October will fix all these issues

NOT A PRIORITY AT THE MOMENT

  • Checkpointing not available via WS
    • The new WS version has checkpointing available

  • The WA blocks when a VM is under creation, and the announcement is sent too early, when the machine is not yet in the DNS.
    • This will also be fixed in the next WA version

  • e-group as VM administrator does not work.
    • An e-group can be set as responsible but not as owner

  • After checkpoint restore, it is not possible to ssh anymore using the CERN account.
    • This is due to Kerberos problems with the clock. Avoid using Kerberos, or make sure the NTP service restarts so the clock resyncs (see the sketch below)
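
A possible recovery sequence after a restore (a sketch assuming the standard SL init scripts; ip-time-1.cern.ch is an assumed CERN NTP server name):

%> service ntpd stop
%> ntpdate ip-time-1.cern.ch    (step the clock back to the present)
%> service ntpd start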

QUESTIONS
  • Missing Documentation
    • The new WS and WA will have more documentation

  • We see two strange image names: Quattor Server and Custom Server. Are these mechanisms to load custom images?
    • No, these are tricks to install machines with Quattor or from a CD image

  • Do you have any plans to support other Linux images, something like SL or Debian?
    • The plan is to give full access to a custom image library to be managed by GT

  • Can a VM be made visible outside CERN if requested from netops, or are there more showstoppers on your side compared with normal hardware nodes?
    • No more problems than for a hardware machine. CS is running out of IPs, so that may need to be negotiated with them

  • We will need to add ssh public keys in the VMs to allow test scripts to access the VMs. Do you think there would be any problems with this?
    • No problem

  • How does the Hyper-V system allocate VMs to hypervisors when a user creates a VM? Is it possible to change this behavior, or to move a VM between hypervisors in case of performance problems?
    • HyperV picks the best-performing hypervisor and allocates the VM there. It is possible to migrate, with a short downtime of the VM

  • You told me you have user groups. By default we are in the Universe world, which is used for evaluation, but later we can have our own group where we can upload images, manage hypervisors and add our private resources as hypervisors. How does this work? Can we test it?
    • GT would get its own group of users, with its own group of images in a library and its own group of hypervisors

  • As a workaround for the 30-minute creation time, we would need to pre-create a pool of VMs for each image, snapshot them and then stop them. This would allow our command line interface to offer 2 different calls to get a VM: a slow one, where the user actually creates a VM and assigns the hostname to it, and for this he can wait 30 minutes; and a second call that just restarts a stopped VM from the pool and gives the user the hostname. Once the user releases the VM, we revert it to the checkpoint and stop it again. In your opinion, would having many VMs stopped and waiting for a user to claim them cause a performance deterioration for the VMs running at the same time? Are the stopped VMs removed from the hypervisor or not? What happens to the VM checkpoints?
    • VMs that are off do not influence hypervisor performance; they just take up space.
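
In terms of the checkpoint methods listed in the 1/12/2010 entry above, the pool workaround would reduce to something like the following sketch (VM power on/off calls are omitted because their WS method names are not documented on this page; credentials and pool01 are placeholders):

# wget -qO- --no-check-certificate --http-user=USER --http-password=PASS "https://vmm.cern.ch/vmmgtsvc/service.asmx/VMCheckPointCreate?VMName=pool01&CheckPointName=clean&Description=PoolBaseline"    (at pool-creation time: snapshot the clean VM)
# wget -qO- --no-check-certificate --http-user=USER --http-password=PASS "https://vmm.cern.ch/vmmgtsvc/service.asmx/VMCheckPointRestore?VMName=pool01&CheckPointName=clean"    (on release: revert the VM to the clean state)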

23/06/2010 - GT Virtualization Meeting

Participants: Lorenzo Dini, Alberto Aimar, Alberto Resco, Tomasz Wolak

STATUS
Alberto, Tomasz and myself tried the CVI interfaces to manage VMs and played a bit with them: creating VMs with the NICE interface and then trying to manage checkpoints with the ActiveX control.

We had a look at the WS and tried to find out what operations are available and how to script a replacement of the VNode Command Line Interface and of the ETICS submitter with a lightweight CLI client connecting to the CVI web-service.

ISSUES
BLOCKING
  • In order to do more thorough testing, we need to create VMs based on the images we use every day. This would allow us to plug some images into production and see how they behave. We would need to prepare these images so that CVI can accept them, either by converting a VMWare VMDK file or a XEN tar.gz, or by creating a VHD image from scratch. We found a free tool online to convert from VMDK to VHD, but it has to be tested (see the conversion sketch below). No tools found to convert from XEN tar.gz.
Then we would need to add these images to the system: either you do this for us (as a start) or you provide a recipe for us to do it ourselves. The best candidate would be a plain vanilla SL5 64-bit.
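
One freely available converter worth testing (named here as a suggestion, not necessarily the tool mentioned above) is qemu-img, which can write Hyper-V's VHD ("vpc") format; the filenames are placeholders:

%> qemu-img convert -f vmdk sl5_64.vmdk -O vpc sl5_64.vhd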

IMPORTANT

  • The performance of the web-service and web-application is not good. It is often very slow or completely down, which makes the service quite unusable. Would it be possible to have a more reliable service?

NOT A PRIORITY AT THE MOMENT

  • The checkpoint feature is not available from the web-service nor from the web-application; would it be possible to have it implemented?
  • The creation of a VM takes around 30 minutes (including the DNS refresh), which is not acceptable for our use-cases. We can work around this if the checkpoint feature is provided in the web-service and/or web-application.
  • We tried to assign administration of a VM to an e-group, but it did not work for us: only the Owner had ssh access. We tried both updating a running VM and creating a VM using an e-group, without success.
  • The time of the VM sometimes goes crazy. Do you know any method to keep it close to the hypervisor's clock?
  • After a checkpoint restore, it is no longer possible to ssh in using the CERN account.

QUESTIONS
  • Is it possible to have some documentation on the SOAP webservice? From the URL you provided it is possible to see all the methods and the parameters, but what a method actually does is not always clear.
  • We see two strange image names: Quattor Server and Custom Server. Are these mechanisms to load custom images?
  • Do you have any plans to support other Linux images, something like SL or Debian?
  • Can a VM be made visible outside CERN if requested from netops, or are there more showstoppers on your side compared with normal hardware nodes?
  • We will need to add ssh public keys in the VMs to allow test scripts to access the VMs. Do you think there would be any problems with this?
  • How does the Hyper-V system allocate VMs to hypervisors when a user creates a VM? Is it possible to change this behavior, or to move a VM between hypervisors in case of performance problems?
  • You told me you have user groups. By default we are in the Universe world, which is used for evaluation, but later we can have our own group where we can upload images, manage hypervisors and add our private resources as hypervisors. How does this work? Can we test it?
  • As a workaround for the 30-minute creation time, we would need to pre-create a pool of VMs for each image, snapshot them and then stop them. This would allow our command line interface to offer 2 different calls to get a VM: a slow one, where the user actually creates a VM and assigns the hostname to it, and for this he can wait 30 minutes; and a second call that just restarts a stopped VM from the pool and gives the user the hostname. Once the user releases the VM, we revert it to the checkpoint and stop it again. In your opinion, would having many VMs stopped and waiting for a user to claim them cause a performance deterioration for the VMs running at the same time? Are the stopped VMs removed from the hypervisor or not? What happens to the VM checkpoints?

22/06/2010 - Virtualization Workshop

Participants: Lorenzo Dini, Jan Van Eldik, Alberto Aimar

  • Further discussion about how to work around the long waiting time for VM creation using checkpoints

23/04/2010 - CVI Demo

Participants: Lorenzo Dini, Jan Van Eldik

  • Demo on how to create and manage VMs
  • Some trials by Lorenzo in the following weeks

22/04/2010 - HEPIX Requirement Presentation

Presenter: Lorenzo Dini
Participants: Jan Van Eldik, Alberto Resco, Tomasz Wolak

Andrew Elwell virtualization comparison report

09/02/2010 - First Meeting

Participants: Jan Van Eldik, Alberto Aimar, Lorenzo Dini
  • Description of ETICS and VNode virtualization solutions
  • Description of CERN Virtual Infrastructure

Topic attachments
| Attachment | Size | Date | Who | Comment |
| cvi-performance.PNG | 28.8 K | 2010-12-10 | UnknownUser | |
| cvi-vmware-performance.PNG | 59.6 K | 2010-12-06 | UnknownUser | |
| dmesg.sl5-64-orig_machine.txt | 18.6 K | 2011-02-01 | TomaszWolak | dmesg - sl5 orig. VM (no error) |
| dmesg.sl5_64_template.txt | 20.4 K | 2011-02-01 | TomaszWolak | sl5 dmesg with kernel error |