This Wiki is the old Xen Wiki, which is now read-only!
Please go to wiki.xen.org for the live version!
Fedora 13/14 Xen 4.0 Tutorial

This is a step-by-step tutorial on how to install the Xen hypervisor 4.0.1 and the long-term maintained Linux pvops dom0 kernel 2.6.32.x on Fedora 13 (x86_64) Linux. By default Fedora 13 includes Xen 3.4.3 RPMs, but this tutorial explains how to install the newer Xen 4.0.1 version from a src.rpm package. The pvops dom0 kernel will be fetched from a git repository and compiled from source. You'll also make your F13 system ready for compiling Xen from source and for Xen development and testing.

Note that this tutorial disables various security features (SELinux, the iptables firewall, etc.) to rule out unexpected problems during setup. After getting everything to work you should re-enable them. Follow this tutorial step by step and you'll end up with a working system.

The steps below also work for Fedora 14 (as of 4 Feb 2011). Fedora 14 includes Xen 4.0.2 rpm binaries in the default repositories, and Fedora 15 includes Xen 4.1.1 rpm binaries in its default repositories.

Hardware used in this tutorial:

  • Intel Core2 Quad CPU.
  • 8 GB of RAM.
  • SATA harddisk (AHCI mode).
  • DVDROM drive.
  • Intel NIC (e1000), DHCP for Internet access.

For generic information about the Xen 4.0 release, see the Xen4.0 wiki page.

This tutorial was verified to work on 30 October 2010.

Fedora 13 installation

Download the 64-bit Fedora 13 x86_64 install CD (1/5) or DVD and burn it to a CD-R/DVD-R. I used the CD1 method.

  • Boot your computer from the CD or DVD.
  • If booting from CD1: when the Fedora bootloader starts, press TAB to enter additional boot options and add the "askmethod" option to install from a network URL (http/ftp mirror).
  • Install Fedora in the usual way.
  • Note about a bug in the F13 installer: after selecting "Basic Storage Devices" and clicking Next, the installer stalls for several minutes; just wait patiently and it will continue. The "Finding storage devices" window that pops up afterwards also takes a long time before continuing. This probably only happens on certain hardware configurations.
  • Note about disk partitioning: make /boot the primary (first) partition, choose "ext3" as its filesystem type (not the default "ext4"), and make it big enough (say 2 GB) to fit all the development debug-enabled kernels and the large initrd images caused by debug-enabled kernel modules. As the second partition create an LVM PV (Physical Volume) and an LVM Volume Group on it, then create your root (/) partition on the volume group; it should be at least 40 GB to fit all the development tools and source trees. I used "ext4" for the root filesystem. Create your swap partition as an LVM volume as well.

  • Important note about the LVM volume group setup: you should leave free space in the LVM volume group for storing guest virtual disks!

  • See this F13 installer screenshot for a disk partitioning and LVM setup example: f13-installer-partitions-for-xen

  • I set the hostname to "f13.localdomain". This hostname is needed later in the tutorial to fix the "/etc/hosts" file contents.
  • Choose the "Minimal" installation method and "Customize Later". All the required software will be installed after the initial installation; there's no need to add additional software repositories during the installation.
  • When the installation is done reboot the computer and wait for Fedora to start up.

Configuration after installation

This step contains some common settings to configure in the newly installed system.

After the installation, log in as "root" on the console.

Enable automatic start of networking and start the network (it's disabled by default):

# chkconfig network on
# /etc/init.d/network start

After starting the network you can log in over ssh, if you prefer to configure and set things up remotely. Use "ifconfig" to check the IP address of the newly installed system (if using DHCP).

Then we continue and install some commonly used and needed tools:

# yum install screen vim wget tcpdump ntp ntpdate man smartmontools ethtool

Enable and start ntpd to keep time synchronized:

# chkconfig ntpd on
# chkconfig ntpdate on
# /etc/init.d/ntpdate start
# /etc/init.d/ntpd start

Edit "/boot/grub/grub.conf": set "timeout=10" and comment out the "hiddenmenu" option, so it looks like:

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/vg_f13-lvroot
#          initrd /initrd-[generic-]version.img
#boot=/dev/sda
default=0
timeout=10
splashimage=(hd0,0)/grub/splash.xpm.gz
#hiddenmenu
title Fedora (2.6.33.3-85.fc13.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.33.3-85.fc13.x86_64 ro root=/dev/mapper/vg_f13-lvroot rd_LVM_LV=vg_f13/lvroot rd_LVM_LV=vg_f13/lvswap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=fi rhgb quiet
        initrd /initramfs-2.6.33.3-85.fc13.x86_64.img

After fixing the timeout you can choose which kernel to boot during system startup. By default (in F13) you don't get to choose the kernel - the grub menu is skipped.
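
The two grub.conf edits above (timeout and hiddenmenu) can also be scripted with sed. A sketch against a throwaway sample file - on the real system point the same sed command at /boot/grub/grub.conf; the sample's original "timeout=0" value is an assumption:

```shell
# Demonstrate the two grub.conf edits on a temporary sample file
conf=$(mktemp)
printf 'default=0\ntimeout=0\nhiddenmenu\n' > "$conf"
# Set the menu timeout to 10 seconds and comment out hiddenmenu
sed -i -e 's/^timeout=.*/timeout=10/' -e 's/^hiddenmenu/#hiddenmenu/' "$conf"
cat "$conf"
rm -f "$conf"
```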

Edit "/etc/selinux/config" and disable SELinux. We want to make sure we don't get problems from overly strict SELinux policies at this point:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Fix "/etc/hosts" by adding an entry for the hostname you specified during installation. You'll get all kinds of weird errors if there's no hostname/FQDN entry in the hosts file:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1   f13 f13.localdomain
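
After saving the file you can sanity-check the entry; a sketch that greps a sample hosts line (on the real system, "getent hosts f13" should return the 127.0.0.1 entry):

```shell
# Check that the short hostname appears as a whole word in the hosts entry
hosts_line='127.0.0.1   f13 f13.localdomain'
echo "$hosts_line" | grep -wq 'f13' && echo 'hostname entry present'
```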

Install "xorg-x11-xauth" to be able to use X11 forwarding over ssh session:

# yum install xorg-x11-xauth

Install all the latest Fedora package updates, security fixes etc:

# yum update

At the time of writing "yum update" needed to fetch around 85 MB of package updates from the Fedora mirrors.

Disable the default Fedora iptables firewall (make sure your network is otherwise secure, i.e. behind a firewall):

# /etc/init.d/iptables stop
# chkconfig iptables off

Disable ksmtuned so that it won't flood the console with errors (it's currently not compatible with Xen):

# /etc/init.d/ksmtuned stop
# chkconfig ksmtuned off

At this point it's best to reboot the system to get the newest kernel in use and make sure everything works so far. Before rebooting, check "/boot/grub/grub.conf" and verify that the correct (newest) kernel is the default, then reboot:

# reboot

After the system reboots, verify that the firewall was properly disabled and no iptables rules are in use anymore:

# iptables -L -n -v
Chain INPUT (policy ACCEPT 99 packets, 11467 bytes)
 pkts bytes target     prot opt in     out     source               destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
Chain OUTPUT (policy ACCEPT 97 packets, 9805 bytes)
 pkts bytes target     prot opt in     out     source               destination

Also verify SELinux is disabled:

# getenforce
Disabled

Now all the basic setup is done and you can move forward.

Installing Xen 4.0.1 from RPMs

For Fedora 14, pre-compiled Xen RPMs are available, so running "yum install xen" pulls in all the necessary Xen components.

The latest and greatest updates are available for download from http://koji.fedoraproject.org/koji/packageinfo?packageID=7 . In theory these RPMs may also work on Fedora 13, but this has not been tested.

Installing packages required for compiling Xen from sources

For more information about installing Xen 4.0 from sources please see Xen4.0 wiki page.

Install required development tools and libraries:

# yum groupinstall "Development Libraries"
# yum groupinstall "Development Tools"

At the time of writing these downloads were around 100 MB and 105 MB in size.

Then install some additional packages required for building Xen from sources and running it:

# yum install transfig wget texi2html libaio-devel dev86 glibc-devel e2fsprogs-devel gitk mkinitrd iasl xz-devel bzip2-devel pciutils-libs pciutils-devel SDL-devel libX11-devel gtk2-devel bridge-utils PyXML qemu-common qemu-img mercurial

At the time of writing these downloads were around 98 MB in size.

You also need to install the 32-bit version of glibc-devel:

# yum install glibc-devel.i686

Now all the required packages are installed and you can move forward.

Building Xen 4.0.1 rpm binaries from src.rpm source package

You can get the xen src.rpm source package from Fedora koji, from any Fedora 14 (or rawhide) FTP/HTTP mirror (http://ftp.funet.fi/pub/mirrors/fedora.redhat.com/pub/fedora/linux/releases/14/Everything/source/SRPMS/xen-4.0.1-6.fc14.src.rpm), or from http://pasik.reaktio.net/fedora/xen-4.0.1-6.fc14.src.rpm .

The Fedora Xen 4.0.1-6 rpm contains some bugfixes backported from the upcoming Xen 4.0.2.

Download and install the src.rpm source package:

# wget http://pasik.reaktio.net/fedora/xen-4.0.1-6.fc14.src.rpm
# rpm -i xen-4.0.1-6.fc14.src.rpm

Then rebuild the source package to generate binary rpms:

# cd /root/rpmbuild/SPECS
# rpmbuild -bb xen.spec

After a while when the build process finishes you should see output like:

Wrote: /root/rpmbuild/RPMS/x86_64/xen-4.0.1-6.fc13.x86_64.rpm
Wrote: /root/rpmbuild/RPMS/x86_64/xen-libs-4.0.1-6.fc13.x86_64.rpm
Wrote: /root/rpmbuild/RPMS/x86_64/xen-runtime-4.0.1-6.fc13.x86_64.rpm
Wrote: /root/rpmbuild/RPMS/x86_64/xen-hypervisor-4.0.1-6.fc13.x86_64.rpm
Wrote: /root/rpmbuild/RPMS/x86_64/xen-doc-4.0.1-6.fc13.x86_64.rpm
Wrote: /root/rpmbuild/RPMS/x86_64/xen-devel-4.0.1-6.fc13.x86_64.rpm
Wrote: /root/rpmbuild/RPMS/x86_64/xen-licenses-4.0.1-6.fc13.x86_64.rpm
Wrote: /root/rpmbuild/RPMS/x86_64/xen-debuginfo-4.0.1-6.fc13.x86_64.rpm

Install the newly built rpms:

# cd /root/rpmbuild/RPMS/x86_64/
# rpm -Uvh *4.0.1-6*.rpm
Preparing...                ########################################### [100%]
   1:xen-licenses           ########################################### [ 13%]
   2:xen-libs               ########################################### [ 25%]
   3:xen-hypervisor         ########################################### [ 38%]
   4:xen-runtime            ########################################### [ 50%]
   5:xen                    ########################################### [ 63%]
   6:xen-devel              ########################################### [ 75%]
   7:xen-doc                ########################################### [ 88%]
   8:xen-debuginfo          ########################################### [100%]

Download or compile Linux 2.6.32.x pvops Xen dom0 kernel

For more information about pvops dom0 kernels please see XenParavirtOps wiki page.

Fedora developer M A Young is building binary "xendom0" kernel rpms for Fedora. You can get the kernel rpms from his site:

As of 14 Apr 2011, the compiled kernels were last updated on 24 March 2011 and were built for Fedora 13 according to the filename; however, they should also work on Fedora 14. You can compile the kernel yourself to get the latest updates, or download the kernel RPMs.

Download the kernel source from xen.git and check out the long-term maintained 2.6.32.x branch:

# git clone git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git linux-2.6-xen
# cd linux-2.6-xen
# git checkout -b xen/stable-2.6.32.x origin/xen/stable-2.6.32.x

Note! If "git clone" fails, it's most probably caused by a network problem on your end; some broken firewalls, NAT routers, and proxies cause problems with git clone.

Example output from git:

[root@f13 kernel]# git clone git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git linux-2.6-xen
Cloning into linux-2.6-xen...
remote: Counting objects: 1748126, done.
remote: Compressing objects: 100% (292844/292844), done.
Receiving objects: 100% (1748126/1748126), 359.40 MiB | 34.70 MiB/s, done.
remote: Total 1748126 (delta 1452892), reused 1733298 (delta 1439822)
Resolving deltas: 100% (1452892/1452892), done.

[root@f13 kernel]# cd linux-2.6-xen/

[root@f13 linux-2.6-xen]# git checkout -b xen/stable-2.6.32.x origin/xen/stable-2.6.32.x
Branch xen/stable-2.6.32.x set up to track remote branch xen/stable-2.6.32.x from origin.
Switched to a new branch 'xen/stable-2.6.32.x'

[root@f13 linux-2.6-xen]#

Check the latest changes in the branch (git changelog):

# git log | less

Download the reference config file for the 2.6.32.x kernel, then run "make oldconfig" to adapt the configuration to the current kernel version (if it differs):

# wget -O .config http://pasik.reaktio.net/xen/pv_ops-dom0-debug/config-2.6.32.25-pvops-dom0-xen-stable-x86_64
# make oldconfig

Note that the example config file above is DEBUG-enabled, possibly causing big performance hits, so don't use it for performance testing!

Then build the kernel. Replace the "4" in "-j4" with the number of physical CPU cores you have, to speed up the compilation:

# make clean
# make -j4 bzImage && make -j4 modules
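
Instead of hard-coding the job count, you can derive it from the machine; a sketch using "nproc" from coreutils:

```shell
# Detect the CPU core count and use it as the make job count
JOBS=$(nproc)
echo "building with ${JOBS} parallel jobs"
# then run: make -j"${JOBS}" bzImage && make -j"${JOBS}" modules
```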

After successful compilation, install the kernel modules and the kernel itself. In this example we assume the kernel version is "2.6.32.25":

# make modules_install
# depmod -a 2.6.32.25
# cp -a arch/x86/boot/bzImage /boot/vmlinuz-2.6.32.25
# cp -a System.map /boot/System.map-2.6.32.25
# cp -a .config /boot/config-2.6.32.25
# cd /boot
# dracut initramfs-2.6.32.25.img 2.6.32.25

Don't worry about the warnings from dracut. Dracut might take a couple of minutes to execute. Example dracut output:

[root@f13 boot]# dracut initramfs-2.6.32.25.img 2.6.32.25
grep: /usr/share/plymouth/themes/.plymouth/.plymouth.plymouth: No such file or directory
The default plymouth plugin () doesn't exist
[root@f13 boot]#

Prepare to reboot into Xen

Finally, set up a new grub entry to boot the Xen hypervisor with the pvops dom0 kernel. Edit "/boot/grub/grub.conf" and make it look like this:

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/vg_f13-lvroot
#          initrd /initrd-[generic-]version.img
#boot=/dev/sda
default=0
timeout=10
splashimage=(hd0,0)/grub/splash.xpm.gz
#hiddenmenu
title Fedora (2.6.33.6-147.2.4.fc13.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.33.6-147.2.4.fc13.x86_64 ro root=/dev/mapper/vg_f13-lvroot rd_LVM_LV=vg_f13/lvroot rd_LVM_LV=vg_f13/lvswap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=fi rhgb quiet
        initrd /initramfs-2.6.33.6-147.2.4.fc13.x86_64.img
title Fedora (2.6.33.3-85.fc13.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.33.3-85.fc13.x86_64 ro root=/dev/mapper/vg_f13-lvroot rd_LVM_LV=vg_f13/lvroot rd_LVM_LV=vg_f13/lvswap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=fi rhgb quiet
        initrd /initramfs-2.6.33.3-85.fc13.x86_64.img
title Fedora Xen 4.0 with Linux 2.6.32.25 pvops dom0
        root (hd0,0)
        kernel /xen.gz dom0_mem=1024M loglvl=all guest_loglvl=all
        module /vmlinuz-2.6.32.25 ro root=/dev/mapper/vg_f13-lvroot nomodeset
        module /initramfs-2.6.32.25.img

Make sure the "root=/dev/mapper/vg_f13-lvroot" parameter matches what the normal Fedora kernel entries above use; the "root=" parameter must be correct for your setup/installation.
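
One way to double-check the value is to pull "root=" out of an existing Fedora kernel line; a sketch against a sample line (run the same grep against your real /boot/grub/grub.conf):

```shell
# Extract the root= parameter from a grub kernel line
sample='kernel /vmlinuz-2.6.33.3-85.fc13.x86_64 ro root=/dev/mapper/vg_f13-lvroot rhgb quiet'
echo "$sample" | grep -o 'root=[^ ]*'
# prints: root=/dev/mapper/vg_f13-lvroot
```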

Verify that Xen services/daemons are properly configured to start automatically:

# chkconfig --list | grep xen
xenconsoled     0:off   1:off   2:off   3:on    4:on    5:on    6:off
xend            0:off   1:off   2:off   3:on    4:on    5:on    6:off
xendomains      0:off   1:off   2:off   3:on    4:on    5:on    6:off
xenstored       0:off   1:off   2:off   3:on    4:on    5:on    6:off

And now you're ready to boot into Xen.

# reboot

When the system restarts, select the Xen entry from the grub boot menu; we haven't changed the default grub entry yet.

Verifying the Xen setup after reboot

When your system is done rebooting log in as root and run the following commands to verify everything is working properly.

Xen hypervisor information:

[root@f13 ~]# xm info
host                   : f13.localdomain
release                : 2.6.32.25
version                : #3 SMP Sat Oct 30 15:24:53 EEST 2010
machine                : x86_64
nr_cpus                : 4
nr_nodes               : 1
cores_per_socket       : 4
threads_per_core       : 1
cpu_mhz                : 2826
hw_caps                : bfebfbff:20100800:00000000:00000940:0408e3fd:00000000:00000001:00000000
virt_caps              : hvm
total_memory           : 8190
free_memory            : 7076
node_to_cpu            : node0:0-3
node_to_memory         : node0:7076
node_to_dma32_mem      : node0:3259
max_node_id            : 0
xen_major              : 4
xen_minor              : 0
xen_extra              : .1
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : unavailable
xen_commandline        : dom0_mem=1024M loglvl=all guest_loglvl=all
cc_compiler            : gcc version 4.4.4 20100630 (Red Hat 4.4.4-10) (GCC)
cc_compile_by          : root
cc_compile_domain      :
cc_compile_date        : Sat Oct 16 00:13:54 EEST 2010
xend_config_format     : 4

Xen domain (vm) list:

# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1017     4     r-----     23.1

Make sure the "Mem" field for Domain-0 is around the same amount you specified with the "dom0_mem" parameter in grub.conf.

Dom0 Linux kernel version:

# uname -a
Linux f13.localdomain 2.6.32.25 #3 SMP Sat Oct 30 15:24:53 EEST 2010 x86_64 x86_64 x86_64 GNU/Linux

The basic setup is now done. You should now go back to "/boot/grub/grub.conf" and change the "default=0" line to "default=2" (or whichever index your new entry is at) to boot into Xen automatically.
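
The change can be scripted with sed; a sketch shown on a temporary sample file (on the real system run the same sed against /boot/grub/grub.conf, after verifying your Xen entry really is at index 2):

```shell
# Demonstrate switching the default grub entry on a temporary sample file
conf=$(mktemp)
printf 'default=0\ntimeout=10\n' > "$conf"
sed -i 's/^default=0/default=2/' "$conf"
grep '^default=' "$conf"
rm -f "$conf"
```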

Installing libvirtd and graphical virt-manager

If you want to install new Xen guests (virtual machines) with the graphical virt-manager GUI, install it like this:

# yum install virt-manager libvirt virt-viewer

Note that libvirt (libvirtd) is also required for text-based guest VM network installations!

Verify "libvirtd" is set to start automatically, so that the "virbr0" bridge NAT/DHCP service provided by dnsmasq works for guest (VM) network installations. Also start it now:

# chkconfig --list libvirtd
libvirtd        0:off   1:off   2:off   3:on    4:on    5:on    6:off

# /etc/init.d/libvirtd start

Verify that the "virbr0" bridge exists and the "dnsmasq" process is running:

# brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.000000000000       yes

# ps aux | grep -i dnsmasq
nobody    1966  0.0  0.0  12784   708 ?        S    23:27   0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/libvirt/network/default.pid --conf-file=  --listen-address 192.168.122.1 --except-interface lo --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-lease-max=253

Verify the IP settings libvirtd/dnsmasq configured for the "virbr0" network interface:

# ifconfig virbr0
virbr0    Link encap:Ethernet  HWaddr 12:57:62:0E:3F:9E
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:933 (933.0 b)

Also verify libvirtd/dnsmasq has added the required iptables NAT rule ("MASQUERADE") to enable Internet access from the virbr0 bridge:

# iptables -t nat -L -n -v
Chain PREROUTING (policy ACCEPT 23 packets, 5301 bytes)
 pkts bytes target     prot opt in     out     source               destination
Chain POSTROUTING (policy ACCEPT 116 packets, 8764 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MASQUERADE  all  --  *      *       192.168.122.0/24    !192.168.122.0/24
Chain OUTPUT (policy ACCEPT 116 packets, 8764 bytes)
 pkts bytes target     prot opt in     out     source               destination

And that IP forwarding (routing) is enabled:

# cat /proc/sys/net/ipv4/ip_forward
1
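
libvirtd normally enables forwarding by itself. If the value ever shows 0, you can enable it for the current boot with "echo 1 > /proc/sys/net/ipv4/ip_forward", and persist it across reboots with this standard sysctl setting, shown here as a config fragment:

```
# /etc/sysctl.conf
net.ipv4.ip_forward = 1
```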

Note that you don't have to run an X server on the Xen system (dom0) to use the graphical virt-manager: you can run virt-manager in dom0 and tunnel the X11 GUI over ssh, displaying the graphical tools on your remote workstation/laptop!

Using ssh X11 forwarding

Install "xorg-x11-xauth" on your Fedora 13 Xen system to be able to use X11 forwarding over ssh session from your desktop/laptop:

# yum install xorg-x11-xauth

If you're connecting from a Linux workstation/laptop enable ssh X11 forwarding like this:

# ssh -X root@<f13_host_ip>

If you're using PuTTY on Windows, enable X11 forwarding in the PuTTY settings, install an X server for Windows (such as Xming), and start it before trying to run graphical applications from the ssh session.

This is what you should see when logging in with ssh for the first time with X11 forwarding enabled in your ssh client. Note that the ssh server system (the Fedora 13 Xen host) needs the "xorg-x11-xauth" rpm package installed:

Last login: Mon Aug 23 21:50:49 2010 from <your_workstation_ip>
/usr/bin/xauth:  creating new authority file /root/.Xauthority

Now you can run graphical (X11) applications and the GUI will be displayed on your local workstation/laptop X server, tunneled over the secure ssh connection. Try running "virt-manager" or any other graphical (X11) tool as an example.

Installing new Xen guests using graphical virt-manager GUI

Virt-manager is not part of Xen; it's developed by Red Hat and included in Fedora, and it can be used to manage the Xen hypervisor, among others. Before trying to install new guests using virt-manager, make sure you read the chapter above about installing virt-manager and related packages.

We're going to use LVM volumes to store the Xen guest virtual disks. In this example we're going to install a CentOS 5.5 x86 (32-bit) Xen PV guest.

First verify the name of your LVM volume group. The LVM volume group was set up during Fedora 13 (Xen dom0 host) installation.

[root@f13 ~]# vgdisplay
  --- Volume group ---
  VG Name               vg_f13
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               463.75 GiB
  PE Size               32.00 MiB
  Total PE              14840
  Alloc PE / Size       2944 / 92.00 GiB
  Free  PE / Size       11896 / 371.75 GiB
  VG UUID               5dsak7-VN89-zMFT-9ZiU-XGhY-s5is-u1vCUw

In this example the LVM volume group is called "vg_f13" and it has 371.75 GB of free space in it.

Now let's create a new 20 GB LVM volume to store the virtual machine's virtual disk:

[root@f13 ~]# lvcreate -ncentos55 -L20G /dev/vg_f13
  Logical volume "centos55" created

Verify the newly created LVM volume:

[root@f13 ~]# lvdisplay /dev/vg_f13/centos55
  --- Logical volume ---
  LV Name                /dev/vg_f13/centos55
  VG Name                vg_f13
  LV UUID                dP41hL-B0MI-Fy4R-ScCI-0w7K-2cfV-ruJRG2
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             640
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

Then let's start the graphical virt-manager. First make sure your ssh X11 forwarding is working properly.

[root@f13 ~]# virt-manager &
[1] 2126

The virt-manager window will then show up on your graphical desktop.

virt-manager01

Start the installation from virt-manager:

  • Right-click "localhost (Xen)"
  • Choose "New"
  • Enter the name for the guest: centos55
  • Choose: "Network install (HTTP, FTP, or NFS)"
  • Enter the CentOS mirror URL: http://ftp.funet.fi/pub/mirrors/centos.org/5.5/os/i386/

  • Click Forward
  • Accept the default values for RAM and VCPUs (512 MB and 1 vcpu) and click Forward.
  • Choose "Select managed or other existing storage"
  • Enter the path to the LVM volume to the text field or choose browse: /dev/vg_f13/centos55
  • Click Forward.
  • Choose/open "Advanced options".
  • Change "Architecture" to "i686", as we're installing a 32-bit PAE guest VM.
  • Make sure the Network selection has "Virtual network 'default': NAT" selected (it is by default).
  • Virt type should be "xen (paravirt)".
  • Click Finish.
  • "Creating virtual machine" window opens, and virt-manager fetches the kernel/initrd from the mirror site.
  • New window opens with the guest VM console in it, and the CentOS5 installer starts.
  • Choose "dhcp" for IPv4 networking and you'll get private IP from the dnsmasq service running in dom0. Network connections from the guest VM will be NATed to the internet.

The CentOS 5 installer starts in text mode and asks for language, keyboard, and network settings: centos55-01

While the CentOS 5 installer is running you can verify the networking by checking the domain list and bridge status:

[root@f13 ~]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1017     4     r-----    267.8
centos55                                     1   512     1     -b----     17.9

[root@f13 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.feffffffffff       yes             vif1.0

Here you can see the guest called "centos55" has ID 1. Interface "vif1.0" (the first network interface of the guest with ID 1) is attached to the bridge "virbr0", the bridge on which dnsmasq provides the DHCP and NAT service.

You can find the private NAT IP assigned to the "centos55" guest VM in "/var/log/messages":

[root@f13 ~]# grep dnsmasq-dhcp /var/log/messages
Sep  4 23:28:05 f13 dnsmasq-dhcp[1929]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
Sep  4 23:59:44 f13 dnsmasq-dhcp[1929]: DHCPDISCOVER(virbr0) 00:16:36:d4:07:b7
Sep  4 23:59:44 f13 dnsmasq-dhcp[1929]: DHCPOFFER(virbr0) 192.168.122.144 00:16:36:d4:07:b7
Sep  4 23:59:44 f13 dnsmasq-dhcp[1929]: DHCPREQUEST(virbr0) 192.168.122.144 00:16:36:d4:07:b7
Sep  4 23:59:44 f13 dnsmasq-dhcp[1929]: DHCPACK(virbr0) 192.168.122.144 00:16:36:d4:07:b7
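
To pull out just the leased IP, the matching line can be run through awk; a sketch using the DHCPACK line above (the IP address is the second-to-last field):

```shell
# Print the IP address field from a dnsmasq DHCPACK log line
line='Sep  4 23:59:44 f13 dnsmasq-dhcp[1929]: DHCPACK(virbr0) 192.168.122.144 00:16:36:d4:07:b7'
echo "$line" | awk '/DHCPACK/ {print $(NF-1)}'
# prints: 192.168.122.144
```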

When the installer finishes downloading files from the mirror site it starts the graphical phase of the installation:

centos55-02

Install CentOS 5 as usual.

Installing new Xen guests using the command line virt-install

Virt-install is not part of Xen; it's developed by Red Hat and included in Fedora, and it can be used to install new Xen guests, among others.

See the virt-manager examples above for the basic LVM usage tips and how to install the required packages; virt-install is installed automatically when you install virt-manager and related packages as shown there.

In this example we're going to install a Fedora 13 x86 (32-bit) Xen PV guest using the command-line virt-install tool.

First we'll create a new LVM volume to be used as the virtual disk for the F13 Xen guest:

[root@f13 ~]# lvcreate -nf13 -L40G /dev/vg_f13
  Logical volume "f13" created

Next make sure ssh X11 forwarding is enabled, since the command below opens a graphical VNC window with the guest console, showing the graphical Fedora installer.

Start the Fedora 13 installation using virt-install. Specify the newly created LVM volume as the guest virtual disk ("-f" parameter):

[root@f13 ~]# virt-install -n f13 -r 768 --vcpus=1 -f /dev/vg_f13/f13 --vnc -p -l "http://ftp.funet.fi/pub/mirrors/fedora.redhat.com/pub/fedora/linux/releases/13/Fedora/i386/os"

Example output from virt-install:

[root@f13 ~]# virt-install -n f13 -r 768 --vcpus=1 -f /dev/vg_f13/f13 --vnc -p -l "http://ftp.funet.fi/pub/mirrors/fedora.redhat.com/pub/fedora/linux/releases/13/Fedora/i386/os"
Starting install...
Retrieving file .treeinfo...                                                                                 | 2.8 kB     00:00 ...
Retrieving file vmlinuz-PAE...                                                                               | 6.7 MB     00:02 ...
Retrieving file initrd-PAE.img...                                                                            |  74 MB     00:01 ...
Creating domain...                                                                                           |    0 B     00:01

A graphical window then opens where you can see the Fedora 13 installer booting up.

f13-01

Select IPv4 networking with DHCP in the Fedora installer. Note about disk partitioning for the guest VM: it's good to make the "/boot" partition "ext3" to avoid problems with pygrub loading the kernel from the guest. Xen 4.0.1 properly supports an ext4 /boot with pygrub, but you never know when you'll need to move the image to an older system lacking ext4 support.

After the installer files have been downloaded the graphical phase of the Fedora 13 installer starts:

f13-02

Install Fedora 13 as usual.

Installing Ubuntu 10.04 LTS (Lucid Lynx) Xen PV guest using the Ubuntu text installer

Ubuntu 10.04 can be installed as Xen PV guest using the default text-based installer included in the Ubuntu distribution.

First create a new LVM volume to store the guest virtual disk:

[root@f13 ~]# lvcreate -nubuntu01 -L20G /dev/vg_f13
  Logical volume "ubuntu01" created

Then download the official Ubuntu Xen guest configuration file:

[root@f13 ubuntu]# wget http://fi.archive.ubuntu.com/ubuntu/dists/lucid/main/installer-amd64/current/images/netboot/xen/xm-debian.cfg
--2010-09-05 01:53:38--  http://fi.archive.ubuntu.com/ubuntu/dists/lucid/main/installer-amd64/current/images/netboot/xen/xm-debian.cfg
Resolving fi.archive.ubuntu.com... 130.230.54.102, 2001:708:310:54::102
Connecting to fi.archive.ubuntu.com|130.230.54.102|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7618 (7.4K) [text/plain]
Saving to: “xm-debian.cfg”
100%[======================================>] 7,618       --.-K/s   in 0.008s
2010-09-05 01:53:38 (911 KB/s) - “xm-debian.cfg” saved [7618/7618]

And rename it to "ubuntu01.cfg":

[root@f13 ubuntu]# mv xm-debian.cfg ubuntu01.cfg
[root@f13 ubuntu]#

Then edit "ubuntu01.cfg" with your favourite text editor and make it look like this (among the other settings already in it):

memory = 1024
name = "ubuntu01"
vcpus = 1
vif = ['mac=00:16:36:64:3d:f3,bridge=virbr0']
disk = ['phy:vg_f13/ubuntu01,xvda,w']

Modify the MAC address so that it is unique.
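
If you need to generate one, a quick way is to keep the 00:16:36 prefix used in this example and randomize the last three octets; a sketch using od on /dev/urandom:

```shell
# Generate a random MAC address with this example's 00:16:36 prefix
od -An -N3 -tx1 /dev/urandom | awk '{printf "00:16:36:%s:%s:%s\n", $1, $2, $3}'
```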

Then find the line in "ubuntu01.cfg" that says "bootloader=pygrub" and add the proper path ("/usr/bin/pygrub") to it:

if not xm_vars.env.get('install'):
    bootloader="/usr/bin/pygrub"
else:

A pre-modified configuration file is available as a reference at http://pasik.reaktio.net/fedora/f13xen4tutorial/ubuntu01.cfg .

Then start the Ubuntu installer:

xm create -f ubuntu01.cfg -c install=true \
    install-kernel="http://fi.archive.ubuntu.com/ubuntu/dists/lucid/main/installer-amd64/current/images/netboot/xen/vmlinuz" \
    install-ramdisk="http://fi.archive.ubuntu.com/ubuntu/dists/lucid/main/installer-amd64/current/images/netboot/xen/initrd.gz" \
    install-mirror="http://fi.archive.ubuntu.com/ubuntu"

The backslashes continue the command across lines; without them the entire command must be entered on a single line. Replace the mirror site URLs with your local mirror.

Ubuntu 10.04 text installer starts:

ubuntu01

Install as usual. Choose DHCP for networking.

ubuntu02

ubuntu03

ubuntu04

ubuntu05

When the installation finishes the Ubuntu guest VM will shut down.

After installation you can start the Ubuntu guest like this:

xm create -f ubuntu01.cfg -c

First you'll see the pygrub menu, which lets you choose which Ubuntu kernel to boot; then you'll get the normal Xen PV guest text console and see the Ubuntu kernel booting. You can exit the console by pressing ctrl+] or ctrl+5.

End of the tutorial.

Fedora13Xen4Tutorial (last edited 2011-08-30 06:23:21 by PasiKarkkainen)
