
Building containers with OpenVZ


OpenVZ is one of the oldest operating-system-level virtualization technologies, dating back to 2005. It is similar to LXC in the sense that it is geared toward running an entire operating system rather than a single program, as Docker does. Being a containerization technology, it shares the host OS kernel, with no hypervisor layer. OpenVZ uses a patched version of the Red Hat kernel that is maintained separately from the vanilla kernel.

Let's explore some of OpenVZ's features and see how they compare to LXC.

For this example deployment, we are going to use Debian Wheezy:

root@ovz:~# lsb_release -rd
Description:      Debian GNU/Linux 7.8 (wheezy)
Release:    7.8
root@ovz:~#

Start by adding the OpenVZ repository and key, then update the package index:

root@ovz:~# cat << EOF > /etc/apt/sources.list.d/openvz-rhel6.list
deb http://download.openvz.org/debian wheezy main
EOF
root@ovz:~#
root@ovz:~# wget ftp://ftp.openvz.org/debian/archive.key
root@ovz:~# apt-key add archive.key
root@ovz:~# apt-get update

Next, install the OpenVZ kernel:

root@ovz:~# apt-get install linux-image-openvz-amd64

If using GRUB, update the boot menu to select the OpenVZ kernel. GRUB counts menu entries from 0, and in this example the OpenVZ kernel is the third entry, so we set GRUB_DEFAULT to 2:

root@ovz:~# cat /boot/grub/grub.cfg | grep menuentry
menuentry 'Debian GNU/Linux, with Linux 3.2.0-4-amd64' --class debian --class gnu-linux --class gnu --class os {
menuentry 'Debian GNU/Linux, with Linux 3.2.0-4-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os {
menuentry 'Debian GNU/Linux, with Linux 2.6.32-openvz-042stab120.11-amd64' --class debian --class gnu-linux --class gnu --class os {
menuentry 'Debian GNU/Linux, with Linux 2.6.32-openvz-042stab120.11-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os {
root@ovz:~#
root@ovz:~# vim /etc/default/grub
...
GRUB_DEFAULT=2
...

root@ovz:~# update-grub
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.2.0-4-amd64
Found initrd image: /boot/initrd.img-3.2.0-4-amd64
Found linux image: /boot/vmlinuz-2.6.32-openvz-042stab120.11-amd64
Found initrd image: /boot/initrd.img-2.6.32-openvz-042stab120.11-amd64
done
root@ovz:~#

We need to enable IP forwarding in the kernel and disable proxy ARP, along with a few other network-related settings:

root@ovz:~# cat /etc/sysctl.d/ovz.conf
net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.conf.all.rp_filter = 1
kernel.sysrq = 1
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
root@ovz:~#
root@ovz:~# sysctl -p /etc/sysctl.d/ovz.conf
...
root@ovz:~#
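
To confirm that a setting took effect, you can query it back; for example, for IP forwarding:

root@ovz:~# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1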

Now it's time to reboot the server, then check whether the OpenVZ kernel is now loaded:

root@ovz:~# reboot
root@ovz:~# uname -a
Linux ovz 2.6.32-openvz-042stab120.11-amd64 #1 SMP Wed Nov 16 12:07:16 MSK 2016 x86_64 GNU/Linux
root@ovz:~#

Next, install the userspace tools:

root@ovz:~# apt-get install vzctl vzquota ploop vzstats

OpenVZ uses templates in a similar way to LXC. The templates are archived root filesystems and can be built with tools such as debootstrap. Let's download an Ubuntu template into the directory where OpenVZ expects templates by default:

root@ovz:~# cd /var/lib/vz/template/
root@ovz:/var/lib/vz/template# wget http://download.openvz.org/template/precreated/ubuntu-16.04-x86_64.tar.gz
root@ovz:/var/lib/vz/template#
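
If you prefer to build a template yourself rather than download a precreated one, debootstrap can generate the root filesystem, which is then archived. The following is a minimal sketch; the target suite, build directory, and archive name are illustrative:

root@ovz:/var/lib/vz/template# apt-get install debootstrap
root@ovz:/var/lib/vz/template# debootstrap --arch=amd64 xenial /tmp/xenial-rootfs http://archive.ubuntu.com/ubuntu
root@ovz:/var/lib/vz/template# tar -zcf ubuntu-16.04-custom-x86_64.tar.gz -C /tmp/xenial-rootfs .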

With the template archive in place, let's create a container:

root@ovz:/var/lib/vz/template# vzctl create 1 --ostemplate ubuntu-16.04-x86_64 --layout simfs
Creating container private area (ubuntu-16.04-x86_64)
Performing postcreate actions
CT configuration saved to /etc/vz/conf/1.conf
Container private area was created
root@ovz:/var/lib/vz/template# 

We specify simfs as the type of the underlying container store, which creates the root filesystem as a directory on the host OS, similar to LXC's default directory backing store. OpenVZ also provides alternatives, such as ploop, which creates an image file containing the container's filesystem.
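
For comparison, a ploop-backed container could be created as follows; the container ID of 2 and the 10 GB disk size here are arbitrary:

root@ovz:/var/lib/vz/template# vzctl create 2 --ostemplate ubuntu-16.04-x86_64 --layout ploop --diskspace 10G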

Next, create a Linux bridge:

root@ovz:/var/lib/vz/template# apt-get install bridge-utils
root@ovz:/var/lib/vz/template# brctl addbr br0

To allow OpenVZ to connect its containers to the host bridge, create the following config file:

root@ovz:/var/lib/vz/template# cat /etc/vz/vznet.conf
#!/bin/bash
EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"
root@ovz:/var/lib/vz/template#

The file specifies an external script that will add the container's virtual interface to the bridge we created earlier.

Let's configure our container with a network interface by specifying the names of the interfaces inside and outside the container, and the bridge they should be connected to:

root@ovz:/var/lib/vz/template# vzctl set 1 --save --netif_add eth0,,veth1.eth0,,br0
CT configuration saved to /etc/vz/conf/1.conf
root@ovz:/var/lib/vz/template#
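
The general form of the --netif_add argument is shown next; leaving fields empty with consecutive commas, as we did for the MAC addresses, lets vzctl generate them automatically:

--netif_add ifname[,mac,host_ifname,host_mac,bridge]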

List the available containers on the host by executing the following command:

root@ovz:/var/lib/vz/template# vzlist -a
CTID      NPROC STATUS    IP_ADDR      HOSTNAME
 1          - stopped      -               -
root@ovz:/var/lib/vz/template# cd

To start our container, run the following command:

root@ovz:~# vzctl start 1
Starting container...
Container is mounted
Setting CPU units: 1000
Configure veth devices: veth1.eth0
Adding interface veth1.eth0 to bridge br0 on CT0 for CT1
Container start in progress...
root@ovz:~#

Then, to attach to, or enter, the container, execute the following commands:

root@ovz:~# vzctl enter 1
entered into CT 1
root@localhost:/# exit
logout
exited from CT 1
root@ovz:~#

Container resources can be manipulated on the fly, without restarting the container, much like with LXC. Let's set the memory to 1 GB:

root@ovz:~# vzctl set 1 --ram 1G --save
UB limits were set successfully
CT configuration saved to /etc/vz/conf/1.conf
root@ovz:~#
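
Other limits can be changed the same way; for example, a sketch of giving the container 512 MB of swap with vzctl's --swap option:

root@ovz:~# vzctl set 1 --swap 512M --save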

Every OpenVZ container has a config file, which is updated when the --save option is passed to the vzctl tool. To examine it, run the following command; notice that the 1 GB memory limit we just set is stored as PHYSPAGES="0:262144", that is, 262,144 pages of 4 KB each:

root@ovz:~# cat /etc/vz/conf/1.conf | grep -vi "#" | sed '/^$/d'
PHYSPAGES="0:262144"
SWAPPAGES="0:512M"
DISKSPACE="2G:2.2G"
DISKINODES="131072:144179"
QUOTATIME="0"
CPUUNITS="1000"
NETFILTER="stateless"
VE_ROOT="/var/lib/vz/root/$VEID"
VE_PRIVATE="/var/lib/vz/private/$VEID"
VE_LAYOUT="simfs"
OSTEMPLATE="ubuntu-16.04-x86_64"
ORIGIN_SAMPLE="vswap-256m"
NETIF="ifname=eth0,bridge=br0,mac=00:18:51:A1:C6:35,host_ifname=veth1.eth0,host_mac=00:18:51:BF:1D:AC"
root@ovz:~#

With the container running, ensure the virtual interface on the host is added to the bridge. Note that the bridge interface itself is in a DOWN state:

root@ovz:~# brctl show
bridge name bridge id          STP enabled    interfaces
br0         8000.001851bf1dac  no             veth1.eth0
root@ovz:~# ip a s
...
4: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether 00:18:51:bf:1d:ac brd ff:ff:ff:ff:ff:ff
6: veth1.eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:18:51:bf:1d:ac brd ff:ff:ff:ff:ff:ff
    inet6 fe80::218:51ff:febf:1dac/64 scope link
      valid_lft forever preferred_lft forever
root@ovz:~#

We can execute commands inside the container without attaching to it. Let's configure an IP address on the container's interface:

root@ovz:~# vzctl exec 1 "ifconfig eth0 192.168.0.5"
root@ovz:~#

Bring the bridge interface on the host up and configure an IP address on it, so that we can reach the container from the host:

root@ovz:~# ifconfig br0 up
root@ovz:~# ifconfig br0 192.168.0.1

Let's test connectivity:

root@ovz:~# ping -c3 192.168.0.5
PING 192.168.0.5 (192.168.0.5) 56(84) bytes of data.
64 bytes from 192.168.0.5: icmp_req=1 ttl=64 time=0.037 ms
64 bytes from 192.168.0.5: icmp_req=2 ttl=64 time=0.036 ms
64 bytes from 192.168.0.5: icmp_req=3 ttl=64 time=0.036 ms
--- 192.168.0.5 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.036/0.036/0.037/0.005 ms
root@ovz:~#
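
For traffic that needs to leave the host, the bridge address can also serve as the container's default gateway; a sketch, assuming 192.168.0.1 as the gateway:

root@ovz:~# vzctl exec 1 "ip route add default via 192.168.0.1"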

Let's enter the container and make sure the available memory is indeed 1 GB, as we set it earlier:

root@ovz:~# vzctl enter 1
entered into CT 1
root@localhost:/# free -g
              total        used        free      shared  buff/cache   available
Mem:              1           0           0           0           0           0
Swap:             0           0           0
root@localhost:/# exit
logout
exited from CT 1
root@ovz:~#

Notice how the OpenVZ container uses init to start all other processes, just like a virtual machine:

root@ovz:~# ps axfww
...
3303 ?        Ss     0:00 init -z
3365 ?        Ss     0:00  \_ /lib/systemd/systemd-journald
3367 ?        Ss     0:00  \_ /lib/systemd/systemd-udevd
3453 ?        Ss     0:00  \_ /sbin/rpcbind -f -w
3454 ?        Ssl    0:00  \_ /usr/sbin/rsyslogd -n
3457 ?        Ss     0:00  \_ /usr/sbin/cron -f
3526 ?        Ss     0:00  \_ /usr/sbin/xinetd -pidfile /run/xinetd.pid -stayalive -inetd_compat -inetd_ipv6
3536 ?        Ss     0:00  \_ /usr/sbin/saslauthd -a pam -c -m /var/run/saslauthd -n 2
3540 ?        S      0:00  |   \_ /usr/sbin/saslauthd -a pam -c -m /var/run/saslauthd -n 2
3542 ?        Ss     0:00  \_ /usr/sbin/apache2 -k start
3546 ?        Sl     0:00  |   \_ /usr/sbin/apache2 -k start
3688 ?        Ss     0:00  \_ /usr/lib/postfix/sbin/master
3689 ?        S      0:00  |   \_ pickup -l -t unix -u -c
3690 ?        S      0:00  |   \_ qmgr -l -t unix -u
3695 ?        Ss     0:00  \_ /usr/sbin/sshd -D
3705 tty1     Ss+    0:00  \_ /sbin/agetty --noclear --keep-baud console 115200 38400 9600 vt220
3706 tty2     Ss+    0:00  \_ /sbin/agetty --noclear tty2 linux
root@ovz:~#

We now know that all container implementations use cgroups to control system resources, and OpenVZ is no exception. Let's see where the cgroup hierarchies are mounted:

root@ovz:~# mount | grep cgroup
beancounter on /proc/vz/beancounter type cgroup (rw,relatime,blkio,name=beancounter)
container on /proc/vz/container type cgroup (rw,relatime,freezer,devices,name=container)
fairsched on /proc/vz/fairsched type cgroup (rw,relatime,cpuacct,cpu,cpuset,name=fairsched)
tmpfs on /var/lib/vz/root/1/sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,size=131072k,nr_inodes=32768,mode=755)
cgroup on /var/lib/vz/root/1/sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /var/lib/vz/root/1/sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /var/lib/vz/root/1/sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio,name=beancounter)
root@ovz:~#
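
In addition to the cgroup hierarchies, OpenVZ exposes per-container resource accounting through its legacy beancounters interface; the current usage, limits, and failure counters for each container can be inspected with the following command:

root@ovz:~# cat /proc/user_beancounters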

The container we created has an ID of 1, as we saw earlier. We can grab the PIDs of all processes running inside the container by executing the following command:

root@ovz:~# cat /proc/vz/fairsched/1/cgroup.procs
3303
3304
3305
3365
3367
3453
3454
3457
3526
3536
3540
3542
3546
3688
3689
3690
3695
3705
3706
root@ovz:~#

We can also obtain the number of CPUs assigned to the container; a value of 0 indicates that no explicit CPU limit has been set:

root@ovz:~# cat /proc/vz/fairsched/1/cpu.nr_cpus
0
root@ovz:~#

Let's assign two cores to the container with ID 1:

root@ovz:~# vzctl set 1 --save --cpus 2
UB limits were set successfully
Setting CPUs: 2
CT configuration saved to /etc/vz/conf/1.conf
root@ovz:~#

Then ensure the change is visible in the same file:

root@ovz:~# cat /proc/vz/fairsched/1/cpu.nr_cpus
2
root@ovz:~#

The container's configuration file should also reflect the change:

root@ovz:~# cat /etc/vz/conf/1.conf | grep -i CPUS
CPUS="2"
root@ovz:~#

Using the ps command, or by reading the preceding system file, we can get the PID of the container's init process; in this example, it is 3303. Knowing that PID, we can obtain the ID of the container it belongs to by running the following command:

root@ovz:~# cat /proc/3303/status | grep envID
envID:      1
root@ovz:~#
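
This works for any process on the host, so a quick way to map each container's init process to its container ID is a small loop over their PIDs; a sketch, assuming the init processes are named init:

root@ovz:~# for pid in $(pgrep -x init); do echo -n "PID $pid: "; grep envID /proc/$pid/status; done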

Since the root filesystem of the container is present on the host, migrating an OpenVZ instance is similar to migrating an LXC container: we first stop the container, then archive the root filesystem, copy it to the new server, and extract it there. We also need the container's config file. Let's see an example of migrating an OpenVZ container to a new host:

root@ovz:~# vzctl stop 1
Stopping container ...
Container was stopped
Container is unmounted
root@ovz:~#

root@ovz:~# tar -zcvf /tmp/ovz_container_1.tar.gz -C /var/lib/vz/private 1
root@ovz:~# scp  /tmp/ovz_container_1.tar.gz 10.3.20.31:/tmp/
root@ovz:~# scp /etc/vz/conf/1.conf 10.3.20.31:/etc/vz/conf/
root@ovz:~#

On the second server, we extract the root filesystem:

root@ovz2:~# tar zxfv /tmp/ovz_container_1.tar.gz --numeric-owner -C /var/lib/vz/private
root@ovz2:~#

With the config file and the filesystem in place, we can list the container by running the following command:

root@ovz2:~# vzlist -a
stat(/var/lib/vz/root/1): No such file or directory
CTID      NPROC STATUS    IP_ADDR         HOSTNAME
1          - stopped       -               -
root@ovz2:~# 
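
The stat warnings appear because the VE_ROOT mount point referenced in the config file does not yet exist on the new host. The container starts regardless, as shown next, but the warnings can be avoided by pre-creating the directory:

root@ovz2:~# mkdir -p /var/lib/vz/root/1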

Finally, to start the OpenVZ instance and ensure it's running on the new host, execute the following command:

root@ovz2:~# vzctl start 1
Starting container...
stat(/var/lib/vz/root/1): No such file or directory
stat(/var/lib/vz/root/1): No such file or directory
stat(/var/lib/vz/root/1): No such file or directory
Initializing quota ...
Container is mounted
Setting CPU units: 1000
Setting CPUs: 2
Configure veth devices: veth1.eth0
Container start in progress...
root@ovz2:~# vzlist -a
CTID      NPROC STATUS    IP_ADDR         HOSTNAME
1         47 running       -               -
root@ovz2:~#
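
OpenVZ also ships a vzmigrate helper that automates these steps over SSH and can even live-migrate a running container with its --online flag; a sketch, assuming passwordless SSH access between the two hosts:

root@ovz:~# vzmigrate --online 10.3.20.31 1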

OpenVZ does not have a centralized control daemon, which makes integration with init systems such as upstart or systemd easier. It's important to note that OpenVZ is the foundation of the Virtuozzo virtualization solution offered by the Virtuozzo company, and that its latest iteration will be distributed as an ISO image of an entire operating system rather than as a patched kernel with a separate toolset.

Note

For more information on the latest OpenVZ and Virtuozzo versions, visit https://openvz.org.