KVM Parametrisation


Look closer into performance issues with KVM: get hold of a RHEL 6 beta, run benchmarks, review kernel options and settings such as CPU affinity, and if necessary also try a vanilla kernel with upstream KVM to see whether that helps.

Requirements: knowledge of the Linux OS and of virtualisation technology (the KVM hypervisor)

Data sources



Current tools


Here is a list of the tools used:

  • CPU Benchmark: HEPSPEC06
  • Disk IO Benchmark: IOZONE
  • Network Benchmark: IPERF 2.0.4


Private communication with A. Chierici at HEPiX 2010 (Cornell)


Link to the lxcloud coordination meeting






At the end of the project we should have
  • an idea of how KVM behaves
  • a set of recommendations for tuning to get optimal CPU, I/O and networking performance

Individual tasks

  • Check performance of KVM in CPU, I/O and networking
  • Explore possibilities to tune KVM, including testing a newer version if required.
  • Test these tuning options quantitatively.

Benchmarks on KVM

Test specifications
  • The hardware used for the benchmark tests is one physical machine with an 8-core Intel(R) Xeon(R) CPU and 24 GB RAM.
  • The VM specs are: KVM VM with 1 VCPU and 2 GB RAM
  • The OS installed:

Hypervisor OS: SLC release 5.5 (Boron) 2.6.18-194.11.3.el5.cve20103081 x86_64

VMs OS: SLC release 5.5 (Boron) 2.6.18-194.17.4.el5 x86_64

CPU Benchmark
  • Tools: HEPSPEC06
  • Scripts: spec2k6_vm.sh

The test is designed as follows:

Part I Without optimization of KVM

1. Start 8 VMs, each with 1 VCPU, on a single hypervisor.

2. Run the script on all VMs at the same time, without CPU pinning.

3. Pin a CPU to each VM, then run the script on each VM again.
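The pinning step can be sketched with libvirt's virsh (a sketch, assuming guest domains named vm0 … vm7 on the 8-core host; the domain names are placeholders for the real guest names):

```shell
#!/bin/sh
# Pin VCPU 0 of each guest to a distinct physical core, then verify.
# Domain names vm0..vm7 are placeholders for the actual guest names.
for i in 0 1 2 3 4 5 6 7; do
    virsh vcpupin "vm$i" 0 "$i"        # VCPU 0 of vm$i -> physical CPU $i
done
for i in 0 1 2 3 4 5 6 7; do
    virsh vcpuinfo "vm$i" | grep 'CPU Affinity'
done
```

The same effect can be obtained without libvirt by running `taskset -pc <cpu> <pid>` against each qemu-kvm process.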

Part II Optimization of KVM with ‘modprobe kvm-intel enable_ept=0’

1.Modify the file: /etc/modprobe.conf on hypervisor

[root@lxbsq0910 ~]# cat /etc/modprobe.conf

alias eth0 igb

alias eth1 igb

alias scsi_hostadapter ahci

options kvm-intel enable_ept=0 (add this line)

options edac_mc log_ue=1 check_pci_parity=1

2. Reboot the hypervisor.

3. Start 8 VMs, each with 1 VCPU, on the hypervisor.

4. Run the script on all VMs at the same time, without CPU pinning.

5. Pin a CPU to each VM, then run the script on each VM again.
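After the reboot, whether the option actually took effect can be checked through the module's parameter in sysfs (a sketch; the kvm-intel parameter is exposed under the name ept, and the file exists only while the module is loaded):

```shell
#!/bin/sh
# Prints "N" when EPT was disabled (enable_ept=0), "Y" otherwise.
cat /sys/module/kvm_intel/parameters/ept
```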

Test Results

See the attached file.

Disk IO Benchmark

  • Tools: IOZONE
  • Options: -Mce -I -+r -r 256k -s 8g -f /usr/vice/cache/iozone_$i.dat$$ -i0 -i1 -i2

The test is designed as follows:

1. Run 8 IOZONE processes on the hypervisor.

2. Start 8 VMs on the hypervisor and run the IOZONE test at the same time in each VM, both with and without CPU pinning.
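Running the eight IOZONE processes "at the same time" comes down to starting them in the background and waiting for all of them to finish. A minimal sketch of that launch pattern (echo stands in for the actual iozone invocation with the options listed above):

```shell
#!/bin/sh
# Launch N benchmark commands in parallel and wait for all of them.
run_parallel() {
    n=$1
    i=0
    while [ "$i" -lt "$n" ]; do
        # Placeholder for: iozone -Mce -I -+r -r 256k -s 8g -f .../iozone_$i.dat -i0 -i1 -i2
        echo "worker $i done" &
        i=$((i + 1))
    done
    wait    # block until every background job has exited
}
run_parallel 8
```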

Test Results

You can read the attach file

Network Benchmark
  • Tools: IPERF 2.0.4
  • Options: ‘-p 11522 -w 458742 -t 60’ requests a TCP window size of 458742 bytes (about 448 KB) and a test duration of 60 secs (the default is 10 secs)
  • Physical Server: lxbsq0910
  • VM Servers: vmbsq091000~ vmbsq091007
  • Client: lxvmpool005

  • Server side command: iperf -s -p PortNumber -w 458742 -t 60
  • Client side command: iperf -c ServerIP -p PortNumber -w 458742 [-P 8] -t 60 (the port number must match the one on the server side)
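As a cross-check of the client invocation, a small helper can assemble the exact command line (the host name and port come from the setup above; this only prints the command, it does not run iperf):

```shell
#!/bin/sh
# Build the iperf client command line for a given server host and
# number of parallel streams, using the options from this test.
iperf_cmd() {
    host=$1
    streams=$2
    echo "iperf -c $host -p 11522 -w 458742 -P $streams -t 60"
}
iperf_cmd lxbsq0910 8    # 8 parallel streams against the hypervisor
```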

The test is designed as follows:

Part I

1. Run 8 parallel threads at the same time on the client to test the hypervisor's throughput.

2. Start 8 VMs on the hypervisor acting as servers; on the client side, run 8 threads almost at the same time, each connecting to one of the servers.

Part II

1. Run a single thread on the client to test the hypervisor's throughput.

2. Perform 8 rounds separately: in the first round, start 1 process connecting to 1 VM; in the second, 2 processes connecting to 2 VMs; and so on, until the final round starts 8 processes connecting to 8 VMs respectively.
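The incremental rounds of Part II come down to two nested loops; a sketch, with echo standing in for the actual iperf client call against each VM server:

```shell
#!/bin/sh
# Round r launches r parallel clients, one per VM server, and waits
# for the whole round to finish before starting the next one.
run_rounds() {
    rounds=$1
    r=1
    while [ "$r" -le "$rounds" ]; do
        j=0
        while [ "$j" -lt "$r" ]; do
            # Placeholder for: iperf -c vmbsq09100$j -p 11522 -w 458742 -t 60 &
            echo "round $r: client -> vmbsq09100$j" &
            j=$((j + 1))
        done
        wait    # all clients of this round must finish first
        r=$((r + 1))
    done
}
run_rounds 8
```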

Test Results

See the attached file.

From the benchmark tests we may draw some conclusions:

  • For the CPU benchmark, pinning performs better than no pinning, and disabling the EPT option improves performance by a further 6%~8%. For the best result, pinning should be combined with the disabled EPT option; with this the performance penalty drops to about 3%.
  • For disk I/O, the write performance penalty is about 10%, while the read penalty is about 12%, slightly higher than for writes.
  • The network performance penalty in VMs is about 3%, which is encouraging. The second chart shows performance in the VMs nearly equal to that of the physical machine, and with 4 VMs the measured value is even better than on bare metal. However, these are preliminary results; more study is needed to investigate and tune parameters and to validate network performance with a real application.

These tests have given us some information about this version of KVM. However, there are many kinds of VMs and kernels, and since each uses a different set of techniques, the results may not be reproducible in other environments. More tests are needed to understand the performance impact of virtualization.


Week 08/11/2010

08/11/2010 - todo

-- JuanManuelGuijarro - 05-Nov-2010

Topic attachments
  • Benchmark_on_VMs.docx (docx, 83.9 K, r2, 2010-12-10, QiulanHuang): a report of the benchmarks on CPU, disk I/O and network performance.
  • CPU_benchmark_test2.pptx (pptx, 135.3 K, r1, 2010-12-10, QiulanHuang): slides about the CPU benchmark.
  • Disk_IO_and_Network_Benchmark_on_VMs.pptx (pptx, 91.5 K, r1, 2010-12-10, QiulanHuang): slides about the disk I/O and network benchmark.
  • rawdata.rar (rar, 33.4 K, r1, 2010-12-13, JuanManuelGuijarro): raw data of the KVM performance test.
Topic revision: r5 - 2010-12-13 - JuanManuelGuijarro