#linuxcon Europe 2012

Conference

This year LinuxCon Europe took place in Barcelona, in a nice venue (Hotel Fira Palace) close to Plaza España. Registration took only a few seconds and was open on Sunday afternoon, very handy to avoid the long Monday morning queue. The main attendee event took place in Casa Batlló, a house designed by Gaudí: http://fr.wikipedia.org/wiki/Casa_Batll%C3%B3 A lot of Red Hat staff were present and we could discuss future products a bit. Some RHEL7 rumours: based on a mix of FC17/18, Pacemaker as the default HA stack, beta availability in March (may have changed with the FC18 delays), LXC.

The main sponsors were showing their cloud and virtualization products:

  • CloudStack, which shipped its first release under the Apache Foundation (CloudStack 4.0) during the conference
  • Red Hat with oVirt / RHEV 2.0
  • Intel: compilers and performance tools.

Keynotes

Advancing the user experience - Mark Shuttleworth

Interesting talk on Ubuntu's involvement in cloud and mobile platforms. The conclusion: convergence, with one security update for all platforms.

- UX in cloud/client
Many choices for developers.
Major transition, shift from scale-up to scale-out.
Volume > server perf.
ARM and x86 for the server market.
OpenStack: create IaaS.
Ubuntu a pioneer in OpenStack. Focus on developers.

Juju: crowdsourced operational handbook.
Charms, drag and drop -> better UX.
Ops: Puppet/Chef reuse.
Dev: develop faster, deploy everywhere, metal or virtual.
Economics of cloud = cheaper OS.

- Phone to PC
Research. Design. Refine.
The only device we will need.
- Virtual ecosystem
Cost of virtual vs PC.
Thin clients, repurposed PCs.
Web apps integrated with the desktop. Contribute integration of web apps.
Chromeless app windows?
Deploying one security update to all devices :-)

Keynote: Evernote & Cloud - Why we do not use the Cloud - Dave Engberg

Very interesting points on when you should not use the cloud, and an interesting TCO analysis. Worth adding that the staff cost is not really included, but that is part of the choice of having in-house staff.

34 million accounts
1.13 billion notes
11 billion unique users / month
400 Red Hat Linux servers
Cloud good for: CPU scaling, bandwidth, latency.
Using the cloud for software updates.
CPU not critical.
ACID DB: <10 TB, 250 read IOPS, 50 write IOPS
Realtime search (Lucene): <10 TB, read/write IOPS 800/500
Attachment storage: 380 TB, de-duplicated, 3x copies, WebDAV
What the cloud is not for: storage.
Debian
SSD box: Supermicro 1U, 2x L5630, 96 GB RAM, 6x 300 GB Intel SSD, LSI RAID5, $8000
Storage box: 4U, 24x 3 TB, LSI RAID6, $12000; 1x L5630 frontend; metadata for 200,000 users per 2 physical servers

Net: $4000/month
Compared to the cloud: $13,700.

Keeping machines for 4 years.
Compute: 2500 read IOPS (sysbench) per VM, $8000, 440 GB usable / month
277 machines cost $58k / month
Compared to the cloud:
277 instances: $126k / month
500 instances: $228k / month (IOPS equivalent)

Storage: 54 TB, $12,100
380 TB: $7.2k / month
Amazon: $37.6k / month
$1 million over 4 years.
Justify the choice on paper.
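As a sanity check, here is a quick back-of-the-envelope comparison in Python using only the monthly figures quoted above; this is my own arithmetic, not from the slides, and the scenario labels are simply how I grouped the numbers.

```python
# Back-of-the-envelope arithmetic on the monthly figures quoted above.
# The 4-year horizon matches the stated hardware lifetime; values in USD/month.
MONTHS = 4 * 12

scenarios = {
    "compute (277 in-house machines vs 277 cloud instances)": (58_000, 126_000),
    "storage (380 TB in-house vs Amazon)": (7_200, 37_600),
}

for name, (in_house, cloud) in scenarios.items():
    extra = (cloud - in_house) * MONTHS
    print(f"{name}: cloud costs ${extra:,.0f} more over {MONTHS} months")
```

On these numbers the storage gap alone already exceeds the quoted $1 million over 4 years.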

Open Source cloud platform Eucalyptus - Mårten Mickos

Presentation of the Eucalyptus approach.
Ex-MySQL boss.
Innovation in a new category.
OpenNebula / OpenStack / CloudStack / Eucalyptus (an acronym).
Moving between clouds.
Public, private, hybrid, mobile.
Be able to trust it.
Amazon is Starbucks; Eucalyptus is an espresso machine.
100 VMs on 100 nodes in 90 seconds.
Agility before size.
Depth before breadth.
Apps before ops.
AWS: join, don't fight.
Simplicity better than complexity.
Open source in the lead, support from commercial players (Citrix, Amazon, etc.).

Growing an Open Source Community - Monty Taylor (HP)

OpenStack community:
The Linux of the cloud.
Risk of differentiation.
In 6 months: 291 developers, 132 companies.
Weekly open meetings.
Meritocracy. Code standards. Automated testing. => goal: automate everything.

Where Linux is going - Linus Torvalds / Dirk Hohndel

20 years of Linux.
The highlights: OLPC, ...
Award?
The kernel is defined by hardware + software,
defined by users, solving the problems in companies;
projects are not.
Projects: git, ...

The future? The kernel community likes arguing; legal situation, patents (SCO).
Hardware companies? Interface changes? Development between 1 and 10 years out = static software.

Scheduler? cgroups, specialised systems = not the same day-to-day testing.

Embedded people not supported by kernel people?

OS innovation? A new Linus? The old ways are the correct way. They work well. Get the details/perf right. Give interfaces to the applications.
Innovation happens on top of the OS & hardware.

Remove unused features? A random person will fix it. New development. A process to remove code exists but is not used that much.
Android fork? "We need this and that": extensions to the kernel were needed but not well designed; there was discussion about it.
Tried to convince the Google people, but decided to merge the code even if it was not perfect.

Concern that kernel maintainers are getting older, high barrier to entry for new people? New people are coming. The situation has changed: more professionals than students.

Power usage more important? Perf vs power is not a versus, it is a compromise. Device drivers need special care. Hardware manufacturers are very conscious about power.

So where are we going short term?
3.7: AArch64. Reminder: kernel development is time-based; features are merged when ready.

My Highlights

OpenStack CI - James E. Blair (HP)

  • Very interesting talk on integration/testing: how to use virtual machines for CI.
DevStack-gate
Problems:
  slow
  the cloud is unreliable
  GitHub is unreliable
  PyPI is unreliable
  distro mirrors are unreliable
  network access in general
A lot of time spent adapting to failures.
Developers could not trust the test process.

Solution:
  create a node
  pre-fetch packages
  snapshot it to a cloud image
  maintain a pool of cloud nodes
  a slave can only be used for one test run
  Python scripts triggered by Jenkins.

Launching nodes in continuous mode:
Test the node.
  Add it to Jenkins as a slave.
  Use a node label to recognise them.
  Keep a node DB with status: new, ready, used, error, delete.
TODO: check that nodes are still viable.

When Jenkins decides to run on a node:
  mark the node as used
  change the label to '-used'
  run the tests
  tests have a timeout
  mark the node for deletion in the DB

Deletion:
  identify the node
  delete it
  remove it from the DB (because deletion can fail)

jclouds-plugin: single-use slaves (FUTURE: pre-creation, pooling features)
Try to get it as a "build step".
JClouds config (screenshot)
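As a rough illustration of the single-use node lifecycle described above, here is a minimal Python sketch. The `cloud` and `jenkins` objects and their methods (boot_node, add_slave, run_job, delete_node) are hypothetical placeholders of my own, not the actual devstack-gate code; only the state machine (new, ready, used, error, delete) follows the notes.

```python
# Sketch of the single-use test-node lifecycle, assuming a plain SQLite
# "node DB" and hypothetical cloud/Jenkins API wrappers.
import sqlite3

db = sqlite3.connect("nodes.db")
db.execute("CREATE TABLE IF NOT EXISTS nodes (name TEXT PRIMARY KEY, state TEXT)")


def launch_node(cloud, jenkins, name):
    """Boot a node from the pre-built snapshot image, test it, register it."""
    db.execute("INSERT INTO nodes VALUES (?, 'new')", (name,))
    server = cloud.boot_node(name)              # hypothetical cloud API wrapper
    if not server.reachable():                  # test the node before trusting it
        db.execute("UPDATE nodes SET state = 'error' WHERE name = ?", (name,))
        return
    jenkins.add_slave(name, label="devstack")   # hypothetical Jenkins API wrapper
    db.execute("UPDATE nodes SET state = 'ready' WHERE name = ?", (name,))


def run_tests(jenkins, name):
    """A node is used for exactly one test run, then scheduled for deletion."""
    db.execute("UPDATE nodes SET state = 'used' WHERE name = ?", (name,))
    jenkins.relabel(name, "devstack-used")
    try:
        jenkins.run_job(name, timeout=3600)     # tests have a timeout
    finally:
        db.execute("UPDATE nodes SET state = 'delete' WHERE name = ?", (name,))


def reap_nodes(cloud):
    """Deletion can fail, so only drop the DB row once the cloud confirms it."""
    for (name,) in db.execute("SELECT name FROM nodes WHERE state = 'delete'"):
        if cloud.delete_node(name):
            db.execute("DELETE FROM nodes WHERE name = ?", (name,))
```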

Snapshots discussion - Kashyap Chamarthy

oVirt

  • oVirt - Itamar Heim (Red Hat)
Alternative to vCenter/vSphere
oVirt node
oVirt Engine
UI in Java, a CLI and a REST API (see the sketch after this list)
live migration
RHEV is oVirt for the enterprise.
Easy to use.
GUI to do it all.
SPICE support.
User portal.
Gluster only mode for 3.3.
  • A lot of talks on different oVirt subjects; I will add them when available.
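Since the engine exposes an API alongside the GUI and CLI, a minimal query could look like the sketch below. The engine URL, credentials and endpoint layout are my own assumptions (oVirt 3.x-era REST API with basic auth), not taken from the talk.

```python
# Hedged sketch of querying the oVirt engine REST API; all names below
# (URL, user, password, /vms path) are placeholders.
import requests

ENGINE = "https://engine.example.com/api"       # placeholder engine URL
AUTH = ("admin@internal", "password")           # placeholder credentials

# List the virtual machines known to the engine (the API returned XML
# by default in the oVirt 3.x era).
resp = requests.get(ENGINE + "/vms", auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.text[:500])
```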

Gluster Workshop (Day 4)

  • Nice workshop, a bit too long (a full day).
  • Gluster for virtual machines is not yet where we would like it to be. Even with 3.4 I doubt we can use it in production, but it will be an alternative for cattle-style workloads.
  • Presentation list: http://www.gluster.org/community/documentation/index.php/Presentations

Optimizing FS perf when memory is tight - Theodore Ts'o (Google)

Optimizing ext4 for low-memory environments.
Status of ext4:
  stable in the most common configurations
  ext4 becoming the default in the main distributions (Fedora, Red Hat, etc.); ext2/3 not removed from the kernel
  punch-hole syscall added (virtualization: emulate the effect of a TRIM/discard call); see the sketch after this list
  metadata checksumming (fixes fsck for some use cases, ignore wrongly written blocks; FUTURE: "this block is not good, try another")
  online resizing of > 16 TB filesystems (kernel + e2fsprogs)
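For reference, the punch-hole operation mentioned above is exposed through fallocate(2). Below is a small Python sketch calling it via ctypes; this is my own illustration assuming a Linux glibc, not something shown in the talk.

```python
# Minimal sketch: punch a hole in a file via fallocate(2).
import ctypes
import os

FALLOC_FL_KEEP_SIZE = 0x01    # keep the apparent file size unchanged
FALLOC_FL_PUNCH_HOLE = 0x02   # deallocate the byte range

libc = ctypes.CDLL("libc.so.6", use_errno=True)
libc.fallocate.argtypes = [ctypes.c_int, ctypes.c_int,
                           ctypes.c_longlong, ctypes.c_longlong]


def punch_hole(path, offset, length):
    """Deallocate `length` bytes at `offset`; the file size is preserved."""
    fd = os.open(path, os.O_WRONLY)
    try:
        ret = libc.fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                             offset, length)
        if ret != 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))
    finally:
        os.close(fd)
```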

Modern filesystem:
  direct I/O, read allocation
  easy <-> complexity
  snapshotting is planned at the block level
  userspace utilities are very mature
  journal block layer (OCFS2 uses it)

Incremental model
 disadvantages: fixed inode table
 bitmap block allocation
 32-bit inode numbers
RAID support is weak
 XFS is better
Lack of sexy features
 compression
 FS-level snapshots (thin-provisioned snapshots instead)
 FS-aware RAID and LVM
  
Default for desktop / server:
 distributions may evolve, but not yet for F19
 Android as well, MMC devices; hope in F2FS
 cloud storage servers: Hadoop FS with ext4, collaboration with ...

Retrospective: grid/utility/cloud, ssdd
Challenges:
  economics: true?
  security:
  usability:
  more efficient: not using resources in ...
  pack a lot of jobs onto smaller servers: virt / containers
    critical: memory (few slots and expensive).

Restricted memory means less caching available.
Benchmarking is not useful without a real load.

Slides

Raw Notes

Day 1
-----

New members. 
New paper.

* Keynote: Advancing the user experience - Mark Shuttleworth
- UX in cloud/client
Many choices for developers.
Major transition, shift from scale-up to scale-out.
Volume > server perf.
ARM and x86 for the server market.
OpenStack: create IaaS.
Ubuntu a pioneer in OpenStack. Focus on developers.
Juju: crowdsourced operational handbook.
Charms, drag and drop -> better UX.
Ops: Puppet/Chef reuse.
Dev: develop faster, deploy everywhere, metal or virtual.
Economics of cloud = cheaper OS.
- Phone to PC
Research. Design. Refine.
The only device we will need.
- Virtual ecosystem
Cost of virtual vs PC.
Thin clients, repurposed PCs.
Web apps integrated with the desktop. Contribute integration of web apps.
Chromeless app windows?
Myth: deploying one security update to all devices :-)

* Keynote: Evernote & Cloud - Dave Engberg
Why we do not use the cloud.
34 million accounts
1.13 billion notes
11 billion unique users / month
400 Red Hat Linux servers
Cloud good for: CPU scaling, bandwidth, latency.
Using the cloud for software updates.
CPU not critical.
ACID DB: <10 TB, 250 read IOPS, 50 write IOPS
Realtime search (Lucene): <10 TB, read/write IOPS 800/500
Attachment storage: 380 TB, de-duplicated, 3x copies, WebDAV
What the cloud is not for: storage.
Debian
SSD box: Supermicro 1U, 2x L5630, 96 GB RAM, 6x 300 GB Intel SSD, LSI RAID5, $8000
Storage box: 4U, 24x 3 TB, LSI RAID6, $12000; 1x L5630 frontend; metadata for 200,000 users per 2 physical servers

Net: $4000/month
Compared to the cloud: $13,700.

Keeping machines for 4 years.
Compute: 2500 read IOPS (sysbench) per VM, $8000, 440 GB usable / month
277 machines cost $58k / month
Compared to the cloud:
277 instances: $126k / month
500 instances: $228k / month (IOPS equivalent)

Storage: 54 TB, $12,100
380 TB: $7.2k / month
Amazon: $37.6k / month
$1 million over 4 years.
Justify the choice on paper.

* Virtual machine snapshots - Kashyap Chamarthy
Disk image file basics.
RAW: IO.
QCOW2: allocated as needed; base image + overlay, thin provisioning, snapshots.
An overlay records the differences from its base image.
Internal snapshot:
disk snapshot
system checkpoint
External snapshot:
disk snapshot
system checkpoint
VM state
External snapshots:
overlays for live backups.
blockpull merges data from the base into the top.
blockcommit merges data from the top into the base.
Only read-only images can be merged.
Demo with qemu-kvm.
qemu 1.0.3 --backing-chain.
libvirt 1.0
Conclusion:
External snapshots in libvirt.
Snapshot revert/delete improvements for external snapshots.
Live/offline blockcommit enhancements.
QEMU live blockcopy (storage migration).
https://kashyapc.wordpress.com
Availability in RHEL?
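A minimal sketch of the external snapshot / blockpull flow described above, using the libvirt Python bindings (libvirt >= 1.0 per the talk). The domain name, disk target and overlay path are placeholders of my own, not from the session.

```python
# Sketch: disk-only external snapshot, then stream the base into the overlay.
import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>pre-change</name>
  <disks>
    <disk name='vda' snapshot='external'>
      <source file='/var/lib/libvirt/images/vm1-overlay.qcow2'/>
    </disk>
  </disks>
</domainsnapshot>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("vm1")                  # placeholder domain name

# Disk-only external snapshot: the current image becomes the read-only base
# and new writes go to the qcow2 overlay file given above.
dom.snapshotCreateXML(SNAPSHOT_XML,
                      libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY)

# blockPull streams data from the base into the active overlay (the
# "blockpull" merge from the talk); it starts a background block job.
# blockCommit is the inverse operation, merging the top back into the base.
dom.blockPull("vda", 0, 0)
```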

Day 2
----
* Open Source cloud platform Eucalyptus - Mårten Mickos
Ex-MySQL boss.
Innovation in a new category.
OpenNebula / OpenStack / CloudStack / Eucalyptus (an acronym).
Moving between clouds.
Public, private, hybrid, mobile.
Be able to trust it.
Amazon is Starbucks; Eucalyptus is an espresso machine.
100 VMs on 100 nodes in 90 seconds.
Agility before size.
Depth before breadth.
Apps before ops.
AWS: join, don't fight.
Simplicity better than complexity.
Open source in the lead, support from commercial players (Citrix, Amazon, etc.).

* Growing an Open Source Community - Monty Taylor (HP)
The Linux of the cloud.
Risk of differentiation.
In 6 months: 291 developers, 132 companies.
Weekly open meetings.
Meritocracy. Code standards. Automated testing. => automate everything.


* Linux Containers / LXC -
RHEL: technology preview.
Support for RHEL 7.
lxc-* tools, lxc-clone, LVM, Btrfs.
lxc-checkconfig
lxc template scripts
criu.org/lxc
Recommendation:
 libvirt
 1.0.0 API freeze planned
 root access inside LXC is dangerous.
 SELinux ready on only a few distributions.
Deactivate kernel logging.
OpenVZ?
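Since the talk recommends driving containers through libvirt rather than the raw lxc-* tools, a minimal sketch could look like this. The container name is a placeholder and the container is assumed to be already defined; this is an illustration, not something shown in the session.

```python
# Sketch: manage an LXC container through the libvirt LXC driver.
import libvirt

conn = libvirt.open("lxc:///")                  # libvirt LXC driver URI
for name in conn.listDefinedDomains():          # containers defined but not running
    print("defined:", name)

dom = conn.lookupByName("mycontainer")          # placeholder container name
if not dom.isActive():
    dom.create()                                # start the container
print("running containers:", conn.numOfDomains())
```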

* OpenStack CI - James E. Blair (HP)
Interrelated integration testing.
DevStack-gate

Problems:
  slow
  the cloud is unreliable
  GitHub is unreliable
  PyPI is unreliable
  distro mirrors are unreliable
  network access in general
A lot of time spent adapting to failures.
Developers could not trust the test process.

Solution:
  create a node
  pre-fetch packages
  snapshot it to a cloud image
  maintain a pool of cloud nodes
  a slave can only be used for one test run
  Python scripts triggered by Jenkins.

Launching nodes in continuous mode:
Test the node.
  Add it to Jenkins as a slave.
  Use a node label to recognise them.
  Keep a node DB with status: new, ready, used, error, delete.
TODO: check that nodes are still viable.

When Jenkins decides to run on a node:
  mark the node as used
  change the label to '-used'
  run the tests
  tests have a timeout
  mark the node for deletion in the DB

Deletion:
  identify the node
  delete it
  remove it from the DB (because deletion can fail)

jclouds-plugin: single-use slaves (FUTURE: pre-creation, pooling features)
Try to get it as a "build step".
JClouds config (screenshot)


oVirt - Itamar Heim (Red Hat)

Alternative to vCenter/vSphere
oVirt node
oVirt Engine
UI in Java, a CLI and an API.
live migration
RHEV is oVirt for the enterprise.
Easy to use.
GUI to do it all.
SPICE support.
User portal.
Gluster only mode.

* Linux @ Intel - Imad Sousou

Speed of innovation.
Moore's law reminder / comp.
Wayland
DLNA, dLeyna
HTML5 API performance and power. Collaboration with W3C.



* SUSE Forward Looking Development - Ralf Flaxa
21 years ago, no Linux business: serial driver 0.0.2, Ted.
FTP, email and newsgroups.
The energy of a group.
1994, Heidelberg.
http://en.opensuse.org/Portal:Tumbleweed


Day 3
-----

* Open Hardware - Catarina Mota

History of free hardware.
IP obesity / SparkFun
3D printers
Arduino, 2005
Global Village Construction Set
OSHW definition 1.0 (2011)
OSHWA association (2012)
BatchPCB crowd-sourcing

* Where Linux is going - Linus Torvalds / Dirk Hohndel
20 years of Linux.
The highlights: OLPC, ...
Award?
The kernel is defined by hardware + software,
defined by users, solving the problems in companies;
projects are not.
Projects: git, ...

The future? The kernel community likes arguing; legal situation, patents (SCO).
Hardware companies? Interface changes? Development between 1 and 10 years out = static software.

Scheduler? cgroups, specialised systems = not the same day-to-day testing.

Embedded people not supported by kernel people?

OS innovation? A new Linus? The old ways are the correct way. They work well. Get the details/perf right. Give interfaces to the applications.
Innovation happens on top of the OS & hardware.

Remove unused features? A random person will fix it. New development. A process to remove code exists but is not used that much.
Android fork? "We need this and that": extensions to the kernel were needed but not well designed; there was discussion about it.
Tried to convince the Google people, but decided to merge the code even if it was not perfect.

Concern that kernel maintainers are getting older, high barrier to entry for new people? New people are coming. The situation has changed: more professionals than students.

Power usage more important? Perf vs power is not a versus, it is a compromise. Device drivers need special care. Hardware manufacturers are very conscious about power.

So where are we going short term?
3.7: AArch64. Reminder: kernel development is time-based; features are merged when ready.

* Optimizing FS perf when memory is tight - Theodore Ts'o (Google)
Optimizing ext4 for low-memory environments.
Status of ext4:
  stable in the most common configurations
  ext4 becoming the default in the main distributions (Fedora, Red Hat, etc.); ext2/3 not removed from the kernel
  punch-hole syscall added (virtualization: emulate the effect of a TRIM/discard call)
  metadata checksumming (fixes fsck for some use cases, ignore wrongly written blocks; FUTURE: "this block is not good, try another")
  online resizing of > 16 TB filesystems (kernel + e2fsprogs)

Modern filesystem:
  direct I/O, read allocation
  easy <-> complexity
  snapshotting is planned at the block level
  userspace utilities are very mature
  journal block layer (OCFS2 uses it)

Incremental model
 disadvantages: fixed inode table
 bitmap block allocation
 32-bit inode numbers
RAID support is weak
 XFS is better
Lack of sexy features
 compression
 FS-level snapshots (thin-provisioned snapshots instead)
 FS-aware RAID and LVM

Default for desktop / server:
 distributions may evolve, but not yet for F19
 Android as well, MMC devices; hope in F2FS
 cloud storage servers: Hadoop FS with ext4, collaboration with ...

Retrospective: grid/utility/cloud, ssdd
Challenges:
  economics: true?
  security:
  usability:
  more efficient: not using resources in ...
  pack a lot of jobs onto smaller servers: virt / containers
    critical: memory (few slots and expensive).

Restricted memory means less caching available.
Benchmarking is not useful without a real load.
    
* Gluster/Ceph - 
Compare

Day 4:
------

* Gluster Community - John Mark Walker
Simple economics:
  commoditization
  lower cost
  scalability
Invention.
Storage should be simple. Data should not be held hostage.

No single point of failure.
Sync and async replication.
Data overlay.
ext4, XFS, Btrfs.
Good for: media, shared storage, big data, objects.
News:
 libgfapi
 FUSE
 translators

* GlusterFS in HA clusters
Pacemaker far beyond the rest :-)
Pacemaker used for auto-recovery because of init system limitations; good idea.
@bartaz
shellinabox
github: lceu2012


* GlusterFS and oVirt - Vijay
GlusterFS client, GlusterFS NFS
-> QEMU 1.2 and 3.3


