Linux: my experience with OpenVZ and Archlinux
Some time ago (a year or so), I changed the hardware of my own server. I decided to take the opportunity to switch my VM system from Xen 3 to OpenVZ (for a lot of reasons that are outside the scope of this article ;).
I finally managed to find some time to write about the whole story on my blog. This is far from being a guide: it's more a series of notes…
Chosen hardware and Linux distro
As I'm a big fan of Archlinux, this is the distro I chose (simplicity, stability, up to date, …).
For the hardware, as my server runs 24/7, I wanted a fairly "green" solution ;). I opted for an Atom-based architecture, which is powerful enough for my own usage (3 containers) and has a limited power consumption:
- Motherboard: ASRock A330GC with an Atom 330 at 1.6 GHz (overclocked to 1.8 GHz)
- Memory: 4 GB of RAM
- Main system disk: 8 GB CompactFlash card on an IDE adapter (more than 40 MB/s with very low power usage)
- VM disks: one 2.5-inch 40 GB drive for the container (VM) and log partitions, plus two 3.5-inch 5400 rpm 2 TB drives for data storage and backup (software mirrored: I hate hardware RAID, as a faulty controller can leave you with two unusable disks…)
- Dual gigabit network cards (bonded, and dispatched to the containers through a software bridge)
Basic container (VM) Creation
Partitions on OpenVZ Host
The host of all containers uses the CompactFlash card as the main system disk, with 3 partitions: /, /boot and /var.
The 2.5-inch 40 GB drive has one partition dedicated to /var/log (to avoid a premature death of the CompactFlash card), and the rest of the disk is used as an LVM volume for the containers.
This solution has several advantages:
- The CompactFlash card ensures the best file access times for the base system (which is more important than raw speed for the container host)
- LVM for the containers offers the best option (IMHO) for resizing and snapshotting containers (think about backups); see the sketch below
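As an illustration, here is a minimal sketch of how such LVM volumes could be created (the volume group name OZDisks matches the fstab below; /dev/sda2 is a hypothetical name for the spare partition on the 40 GB drive):

pvcreate /dev/sda2                  # turn the spare partition into an LVM physical volume
vgcreate OZDisks /dev/sda2          # volume group holding all the container disks
lvcreate -L 10G -n xxx OZDisks      # one logical volume per container (size is an example)
mkfs.ext4 /dev/OZDisks/xxx
# a snapshot gives a frozen view of a container disk for backups
# (it needs free extents left in the volume group):
lvcreate -s -L 1G -n xxx-snap /dev/OZDisks/xxx

With the volumes created and mounted, the host's /etc/fstab looks like this: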
# tmpfs for misc directories that often operate best in memory instead of on disk
tmpfs /tmp tmpfs nodev,nosuid 0 0
tmpfs /var/run tmpfs defaults,noexec,noatime,size=2M 0 0
tmpfs /var/lock tmpfs defaults,noexec,noatime,size=2M 0 0

# here are the CompactFlash partitions. Note the flash-optimised mount parameters
LABEL=BOOT /boot ext2 defaults,noauto,noatime 0 1
LABEL=ROOT / ext4 defaults,noatime,async,barrier=0,commit=100 0 1
LABEL=VAR /var ext4 defaults,noatime,async,barrier=0,commit=100 0 1

########### OpenVZ dedicated partitions ##########
# container disks (replaced with dummy names for this article)
/dev/OZDisks/xxx /vz/private/xxx ext4 defaults,noatime,async 0 1
/dev/OZDisks/yyy /vz/private/yyy ext4 defaults,noatime,async 0 1
/dev/OZDisks/zzz /vz/private/zzz ext4 defaults,noatime,async 0 1
Creating a template container
wget http://download.openvz.org/template/precreated/contrib/arch-2010.05-x86_64-minimal.tar.gz
vzctl create 99 --ostemplate arch-2010.05-x86_64-minimal --private=/vz/private/TmplArchLinux/99
vzctl set 99 --userpasswd root:
vzctl set 99 --diskspace 5G:5G --save
vzctl set 99 --privvmpages 256M:256M --save
vzctl start 99
vzctl enter 99
Then, once the container is started, I just need to adjust the hostname in /etc/rc.conf.
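For example (the hostname "ct99" is just a placeholder):

HOSTNAME="ct99"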
Network settings
Bonding configuration on OpenVZ host and containers
To enable bonding and vzbridge (for container dispatching) on Archlinux, I had to comment out all network settings in /etc/rc.conf and set the NETWORKS variable as follows:
NETWORKS=(bonded vzbridge)
The bonding profile (e.g. /etc/network.d/bonded with netcfg) contains:

CONNECTION="bonding"
INTERFACE="bond0"
SLAVES="eth0 eth1"
IP="0"
DHCP_TIMEOUT=40
For the bridge setup, you first need to install the bridge utilities (pacman -S bridge-utils). Then the config file (the vzbridge profile) is:
INTERFACE="br0"
CONNECTION="bridge"
DESCRIPTION="VZ Bridge for VM"
BRIDGE_INTERFACES="bond0"
IP="dhcp"
DHCP_TIMEOUT=40
At this point, the host server has network access via DHCP over a bonded interface (2×1 Gbps here).
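To check that the bond and the bridge came up as expected, the standard tools can be used (nothing specific to this setup):

cat /proc/net/bonding/bond0   # bonding driver status: mode, slaves, link state
brctl show br0                # the bridge and its attached interfaces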
Using the bonded interface in the VMs + DHCP
First, you need to create the file /etc/vz/vznet.conf to make OpenVZ configure the veth network for the containers (VMs) using bridge-utils. This file just defines the path to a script that comes with OpenVZ (don't forget to set the executable bit on it):
#!/bin/bash
EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"
Then we just need to add a network interface to each container. Example for a container with ID 99 (you can set the MAC address as you wish):
vzctl set 99 --netif_add eth0,00:00:00:00:00:99,,,br0 --save
We are almost done. We still need to configure the network on the container… but for DHCP to work, we have to comment out the 2 lines in the Archlinux template settings (/etc/vz/dist/arch.conf) that begin with ADD_IP and DEL_IP.
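The result should look like this (the script names on the right-hand side may differ depending on the vzctl version):

#ADD_IP=arch-add_ip.sh
#DEL_IP=arch-del_ip.sh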
As a final touch, adjust the following settings in the container's rc.conf file (see the example after this list):
- set eth0="dhcp"
- check that "eth0" is listed in the INTERFACES variable
- check that lo="lo 127.0.0.1" is declared
- check that ROUTES=(!gateway)
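The network-related part of the container's /etc/rc.conf then looks like this (a sketch using the old Arch rc.conf syntax):

lo="lo 127.0.0.1"
eth0="dhcp"
INTERFACES=(lo eth0)
ROUTES=(!gateway)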
Resource allocation for containers
Summary of OpenVZ RAM parameters
A mistake in any of the OpenVZ memory parameters can lead to bad performance and/or unexpected process kills. If I had to remember a few things, I would say:
- privvmpages: allocatable memory (allocated != used). Set barrier = limit, as the limit value is not really used here
- vmguarpages: guaranteed memory, even if the host runs out of memory (and swap starts being used)
- oomguarpages: threshold before any process gets killed
- kmemsize: non-swappable kernel memory used by processes (check usage through /proc/user_beancounters)
To make things clear, a process kill will occur when:
oomguarpages barrier < oomguarpages held (memory + swap in use) + socket buffers + kmemsize
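As an illustration, here is how these parameters could be set on a container (ID 99; the values are arbitrary examples, not recommendations):

vzctl set 99 --privvmpages 512M:512M --save   # barrier = limit, as noted above
vzctl set 99 --vmguarpages 256M --save        # guaranteed memory
vzctl set 99 --oomguarpages 256M --save       # OOM kill threshold
vzctl set 99 --kmemsize 40M:44M --save        # kernel memory (barrier:limit)
vzctl exec 99 cat /proc/user_beancounters     # check current usage and failcnt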
Adding peripherals
This is pretty simple: we just use the DEVNODES variable in the container config file. For example, adding a hard drive:
DEVNODES="sdc1:rw"
Allowing HTTPS/SSL and SSHD usage on a container
By default, containers don't have the "random" devices SSL needs to generate keys (key generation requires entropy from these devices). As OpenVZ is not compatible with udev, these special devices must be created manually.
So, to make Apache work with HTTPS, we need to add /dev/random and /dev/urandom to the container (here, ID 99):
vzctl exec 99 rm /dev/urandom
vzctl exec 99 rm /dev/random
vzctl exec 99 mknod /dev/random c 1 8
vzctl exec 99 mknod /dev/urandom c 1 9
Then, in the container configuration file, we need to allow the urandom device:
DEVICES="c:1:9:rw"
Allowing OpenVPN on a container
Yes, it is possible to use a container as an OpenVPN server… this is one of the advantages OpenVZ has over other container systems (e.g. LXC).
As for SSL, we need to add some devices (shown here with tun; it should be similar for tap):
vzctl set 101 --devices c:10:200:rw --save
vzctl exec 101 mkdir -p /dev/net
vzctl exec 101 mknod /dev/net/tun c 10 200
vzctl exec 101 chmod 600 /dev/net/tun
Then, to allow the container to use iptables (needed for the OpenVPN routing rules), we need to set the following in the container config file:
CAPABILITY="NET_ADMIN:on"
Finally, once OpenVPN is installed on the container, we need to add a routing rule so the container acts as a router to the LAN for any connected client.
In the following example, the VPN network is 192.168.100.0/24 and the IP of the container is 192.168.1.3:
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -j SNAT --to-source 192.168.1.3
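Note that IP forwarding must also be enabled in the container for this rule to actually route traffic (a sketch; depending on the OpenVZ kernel, this may need to be allowed from the host side):

vzctl exec 101 sysctl -w net.ipv4.ip_forward=1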
That's all folks!
I don't have anything else to write. Everything has been working very well for more than a year now. This setup allows running multiple virtual servers with minimal overhead on the host (remember that the Atom 330 has no hardware VM extensions).
I will try to publish some of the scripts I use for maintenance (automatic backup of all containers, automatic SSH IP banning, etc.).
The only sad thing is that upgrading Archlinux can sometimes be tricky, as the container does not run the distribution's standard init scripts…