
Making a Debian Squeeze machine into a Xen virtual machine host

My attempts to get Xen working right on Debian stable (Lenny) were not really successful. Xen has had some interesting developments and the version in Lenny is just too old. Plus, the Debian debootstrap scripts used to create images don’t support Ubuntu Maverick… Squeeze (testing) has the newest Xen and debootstrap, so that’s cool. I used the AMD64 architecture.

First install Xen:

aptitude -P install xen-hypervisor-4.0-amd64 linux-image-xen-amd64 xen-tools

Debian Squeeze uses Grub 2, and its defaults are wrong for Xen. The Xen hypervisor (and not just a Xen-ready kernel!) should be the first boot entry, so do this:

mv -i /etc/grub.d/10_linux /etc/grub.d/50_linux

Then, disable the OS prober, so that you don’t get boot entries for each virtual machine you install on an LVM volume.

echo "" >> /etc/default/grub
echo "# Disable OS prober to prevent virtual machines on logical volumes from appearing in the boot menu." >> /etc/default/grub
echo "GRUB_DISABLE_OS_PROBER=true" >> /etc/default/grub
update-grub2
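
To verify that the hypervisor really ended up on top, you can list the menu entries in the generated config; the Xen entry should come first (the exact titles depend on your Grub and kernel versions):

grep ^menuentry /boot/grub/grub.cfg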

By default (on Debian, anyway), Xen tries to save the state of the VMs on shutdown. I’ve had some problems with that, so I set these params in /etc/default/xendomains to make sure they get shut down normally:

XENDOMAINS_RESTORE=false
XENDOMAINS_SAVE=""
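
If you’d rather not edit the file by hand, something like this should do the same (assuming both variables are already present in /etc/default/xendomains, which they are on Squeeze):

sed -i 's/^XENDOMAINS_RESTORE=.*/XENDOMAINS_RESTORE=false/' /etc/default/xendomains
sed -i 's/^XENDOMAINS_SAVE=.*/XENDOMAINS_SAVE=""/' /etc/default/xendomains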

In /etc/xen/xend-config.sxp I enabled the network bridge (by changing an existing or commented-out line). I also set some other params that are useful for me:

(network-script network-bridge)
(dom0-min-mem 196)
(dom0-cpus 0)
(vnc-listen '127.0.0.1')
(vncpasswd '')
(Also, don’t forget to disable ballooning and to set a maximum memory for dom0.)
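
A sketch of what I mean by that (the 1024M is just an example value, adjust to your machine): cap dom0’s memory on the hypervisor command line in /etc/default/grub and run update-grub2 afterwards, and, if your xend-config.sxp has the option, switch off dom0 ballooning there:

# In /etc/default/grub:
GRUB_CMDLINE_XEN="dom0_mem=1024M"

# In /etc/xen/xend-config.sxp:
(enable-dom0-ballooning no)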

Then I edited /etc/xen-tools/xen-tools.conf. This config contains the default values the xen-create-image script will use. The most important ones are:

# Virtual machine disks are created as logical volumes in volume group 'universe' (LVM storage is much faster than file-backed storage)
lvm = universe
 
install-method = debootstrap
 
size   = 50Gb      # Disk image size.
memory = 512Mb    # Memory size
swap   = 2Gb    # Swap size
fs     = ext3     # use the EXT3 filesystem for the disk image.
dist   = `xt-guess-suite-and-mirror --suite` # Default distribution to install.
 
gateway    = x.x.x.x
netmask    = 255.255.255.0
 
# When creating an image, interactively setup root password
passwd = 1
 
# I think this option was set like this by default, but it doesn't hurt to mention it.
mirror = `xt-guess-suite-and-mirror --mirror`
 
mirror_maverick = http://nl.archive.ubuntu.com/ubuntu/
 
# Ext3 had some weird mount options by default, like noatime. I don't want that, so I changed it to 'defaults'.
ext3_options     = defaults
ext2_options     = defaults
xfs_options      = defaults
reiserfs_options = defaults
btrfs_options    = defaults
 
# Let xen-create-image use pygrub, so that the grub from the VM itself is used, which means you no longer need to store kernels outside the VMs. This keeps things very flexible.
pygrub=1
 
# scsi was specified because, when creating a Maverick image for instance, the xvda disk that is used couldn't be accessed.
# The scsi flag causes device names like sda to be used.
# scsi=1 # no longer needed 
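
For completeness: the ‘universe’ volume group has to exist before xen-create-image can create logical volumes in it. A minimal sketch, assuming /dev/sdb1 is a spare partition you want to dedicate to it (the device name is just an example):

pvcreate /dev/sdb1
vgcreate universe /dev/sdb1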

I created the following script to let me easily create VMs:

#!/bin/bash
 
# Script to easily create VMs. Hardy, Maverick and Lenny have been tested.
 
dist=$1
hostname=$2
ip=$3
 
if [ -z "$hostname" -o -z "$ip" -o -z "$dist" ]; then
  echo "No dist, hostname or ip specified"
  echo ""
  echo "Usage: $0 dist hostname ip"
  exit 1
fi
 
if [ "$dist" == "hardy" ]; then
  serial_device="--serial_device xvc0"
fi
 
# Note: $serial_device is deliberately left unquoted so that it expands to two arguments (or to nothing).
xen-create-image $serial_device --hostname "$hostname" --ip "$ip" --vcpus 2 --pygrub --dist "$dist"
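
Assuming you saved the script as create-vm and made it executable, creating a Maverick guest could look like this (the hostname and IP are made up):

./create-vm maverick machinex 192.168.1.50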

At this point, you can reboot (well, you could have rebooted earlier, but anyway…).
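
After the reboot, a quick way to confirm that you are actually running on the hypervisor is to list the domains; you should at least see Domain-0:

xm list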

Usage of the script should be simple. After creating a VM named ‘machinex’, start it and attach to its console:

xm create -c /etc/xen/machinex.cfg

You can escape the console with ctrl-]. Place a symlink in /etc/xen/auto to start the VM on boot.
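
That symlink would look something like this (again using the made-up ‘machinex’ name):

ln -s /etc/xen/machinex.cfg /etc/xen/auto/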

As a side note: when creating a Lenny VM, the script installs a Xen kernel in the VM. When installing Maverick, it installs a normal kernel. Normal kernels since version 2.6.32 (I believe) support pv_ops, meaning they can run on hypervisors like Xen’s.
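
If you want to check from inside a running guest that it is indeed running under Xen, this should work on such kernels (treat it as a sketch):

cat /sys/hypervisor/type   # prints 'xen' when running under the Xen hypervisor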

