
Configuring resources

Virtual machines require CPUs, memory, storage, and network access, similar to physical machines. This recipe will show you how to set up a basic KVM environment for easy resource management through libvirt.

A storage pool is a virtual container limited by two factors:

  • The maximum size allowed by qemu-kvm
  • The size of the disk on the physical machine

Storage pools may not exceed the size of the disk on the host. The maximum sizes are as follows:

  • virtio-blk = 2^63 bytes or 8 exabytes (raw files or disk)
  • EXT4 = ~ 16 TB (using 4 KB block size)
  • XFS = ~8 exabytes

Getting ready

For this recipe, you will need a volume of at least 2 GB mounted on /vm and access to an NFS server and export.

We'll use NetworkManager to create a bridge, so make sure that NetworkManager is not disabled and that bridge-utils is installed.
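
If you want to verify these prerequisites up front, the following commands are one way to do it on RHEL 7; the export check assumes nfs-utils is installed, and nfsserver is the same placeholder hostname used in the rest of this recipe:

~]# df -h /vm                            # confirm the volume is mounted and large enough
~]# systemctl is-active NetworkManager   # should print "active"
~]# yum -y install bridge-utils          # install the bridge utilities if they are missing
~]# showmount -e nfsserver               # list the exports offered by the NFS server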

How to do it…

Let's have a look at managing storage pools and networks.

Creating storage pools

In order to create a storage pool, we need to provide KVM with the details it needs to create it. You can do this as follows:

  1. Create a localfs storage pool using virsh on /vm, as follows:
    ~]# virsh pool-define-as --name localfs-vm --type dir --target /vm
    
  2. Create the target directory for the NFS storage pool through the following command:
    ~]# mkdir -p /nfs/vm
    
  3. Create an NFS storage pool using virsh on nfsserver:/export/vm, as follows:
    ~]# virsh pool-define-as --name nfs-vm --type netfs --source-host nfsserver --source-path /export/vm --target /nfs/vm
    
  4. Make the storage pools persistent across reboots through the following commands:
    ~]# virsh pool-autostart localfs-vm
    ~]# virsh pool-autostart nfs-vm
    
  5. Start the storage pool, as follows:
    ~]# virsh pool-start localfs-vm
    ~]# virsh pool-start nfs-vm
    
  6. Verify that the storage pools are created, started, and persistent across reboots. Run the following for this:
    ~]# virsh pool-list
     Name                 State      Autostart
    -------------------------------------------
     localfs-vm           active     yes
     nfs-vm               active     yes
    

Querying storage pools

At some point in time, you will need to know how much space you have left in your storage pool.

Get the information of the storage pool by executing the following:

~]# virsh pool-info --pool <pool name>
Name:           nfs-vm
UUID:           some UUID
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       499.99 GiB
Allocation:     307.33 GiB
Available:      192.66 GiB

As you can see, this command gives you a quick view of the pool's capacity, current allocation, and remaining available space.
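
If you prefer a single overview of all pools instead of querying them one by one, pool-list can do that too; its detailed output mirrors the columns that pool-info reports per pool:

~]# virsh pool-list --details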

Tip

Be careful though; if you use a filesystem that supports sparse files, these numbers will most likely be incorrect, and you will have to calculate the actual sizes yourself!

To detect whether a file is sparse, run ls -lhs against the file. The -s option adds an extra column (the first one) showing the exact space the file occupies on disk, as follows:

~]# ls -lhs myfile
121M -rw-------. 1 root root  30G Jun 10 10:27 myfile
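
For disk image files, there is another convenient check: qemu-img reports both the virtual size and the space actually used on disk. The output below is illustrative and matches the sparse file from the ls example above:

~]# qemu-img info myfile
image: myfile
file format: raw
virtual size: 30G (32212254720 bytes)
disk size: 121M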

Removing storage pools

Sometimes, storage is phased out and needs to be removed from the host.

You have to ensure that no guest is using volumes on the storage pool before proceeding, and you need to remove all the remaining volumes from the storage pool. Here's how to do this:

  1. Remove the storage volume, as follows:
    ~]# virsh vol-delete --pool <pool name> --vol <volume name>
    
  2. Stop the storage pool through the following command:
    ~]# virsh pool-destroy --pool <pool name>
    
  3. Delete the storage pool using the following command:
    ~]# virsh pool-delete --pool <pool name>
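
If you are not sure which volumes are still left in a pool, virsh can list them before you start deleting; the volume shown here is purely an example. Note that pool-delete removes the underlying storage, while pool-undefine (not used in this recipe) would additionally remove the pool's definition from libvirt.

~]# virsh vol-list --pool <pool name>
 Name                 Path
------------------------------------------------
 guest01.img          /vm/guest01.img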
    

Creating a virtual network

Before creating the virtual networks, we need to build a bridge over our existing network interface. For the sake of convenience, this NIC will be called eth0. Ensure that you record your current network configuration as we'll destroy it and recreate it on the bridge.

Unlike storage pools, networks require us to create an XML configuration file to define them; there is no equivalent of pool-define-as for networks. Perform the following steps:

  1. Create a bridge interface on your network's interface, as follows:
    ~]# nmcli connection add type bridge autoconnect yes con-name bridge-eth0 ifname bridge-eth0
    
  2. Remove your NIC's configuration using the following command:
    ~]# nmcli connection delete eth0
    
  3. Configure your bridge, as follows:
    ~]# nmcli connection modify bridge-eth0 ipv4.addresses <ip address/cidr> ipv4.method manual
    ~]# nmcli connection modify bridge-eth0 ipv4.gateway <gateway ip address>
    ~]# nmcli connection modify bridge-eth0 ipv4.dns <dns servers>
    
  4. Finally, add your NIC to the bridge by executing the following:
    ~]# nmcli connection add type bridge-slave autoconnect yes con-name slave-eth0 ifname eth0 master bridge-eth0
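
Before moving on, it is worth checking that the bridge is up and that eth0 has been enslaved to it. The following is one way to do that with the tools already required for this recipe; the bridge id shown is just an example:

~]# nmcli connection show --active
~]# brctl show bridge-eth0
bridge name     bridge id               STP enabled     interfaces
bridge-eth0     8000.525400aabbcc       yes             eth0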
    

For starters, we'll take a look at how to create a NATed network similar to the one that is configured out of the box and called default:

  1. Create the network XML configuration file, /tmp/net-nat.xml, as follows:
    <network>
      <name>NATted</name>
      <forward mode='nat'>
        <nat>
          <port start='1024' end='65535'/>
        </nat>
      </forward>
      <bridge name='virbr0' stp='on' delay='0'/>
      <ip address='192.168.0.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.0.2' end='192.168.0.254'/>
        </dhcp>
      </ip>
    </network>
  2. Define the network in the KVM using the preceding XML configuration file. Execute the following command:
    ~]# virsh net-define /tmp/net-nat.xml
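
If you want to double-check what was registered, virsh can print the network definition back out; NATted is the name used in the XML file above. Also make sure the 192.168.0.0/24 range does not overlap with a network that is already in use on your host.

~]# virsh net-dumpxml NATted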
    

Now, let's create a bridged network that uses the host bridge we built earlier, through the following steps:

  1. Create the network XML configuration file, /tmp/net-bridge-eth0.xml, with the following contents:
    <network>
        <name>bridge-eth0</name>
        <forward mode="bridge" />
        <bridge name="bridge-eth0" />
    </network>
  2. Create the network in the KVM using the preceding file, as follows:
    ~]# virsh net-define /tmp/net-bridge-eth0.xml
    

There's one more type of network that is worth mentioning: the isolated network. This network is only accessible to the guests attached to it, as there is no connection to the "real" world.

  1. Create the network XML configuration file, /tmp/net-local.xml, by using the following code:
    <network>
      <name>isolated</name>
      <bridge name='virbr1' stp='on' delay='0'/>
      <domain name='isolated'/>
    </network>
  2. Create the network in KVM by using the above file:
    ~]# virsh net-define /tmp/net-local.xml
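
Because this definition contains no <forward> element, libvirt keeps the network fully isolated. As a variation, you can still hand out addresses on the isolated segment by adding an <ip> block, just like in the NAT example; the name, bridge name, and address range below are made up for this sketch:

<network>
  <name>isolated-dhcp</name>
  <bridge name='virbr2' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.2' end='192.168.100.254'/>
    </dhcp>
  </ip>
</network>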
    

Creating networks in this way will register them with the KVM but will not activate them or make them persistent across reboots. These are additional steps that you need to perform for each network. Now, perform the following steps:

  1. Make the network persistent across reboots using the following command:
    ~]# virsh net-autostart <network name>
    
  2. Activate the network, as follows:
    ~]# virsh net-start <network name>
    
  3. Verify the existence of the KVM network by executing the following:
    ~]# virsh net-list --all
     Name                 State      Autostart     Persistent
    ----------------------------------------------------------
     bridge-eth0          active     yes           yes
     default              inactive   no            yes
     isolated             active     yes           yes
     NATted               active     yes           yes
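
To actually connect a guest to one of these networks, the guest's domain XML refers to the network by name. A minimal interface snippet, assuming a guest that should live on the NATted network, could look like this; alternatively, virsh attach-interface can add such an interface to an existing guest:

<interface type='network'>
  <source network='NATted'/>
  <model type='virtio'/>
</interface>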
    

Removing networks

On some occasions, a network is phased out; in this case, we need to remove it from our setup.

Prior to executing this, you need to ensure that no guest is using the network that you want to remove. Perform the following steps to remove the networks:

  1. Stop the network with the following command:
    ~]# virsh net-destroy --network <network name>
    
  2. Then, delete the network using this command:
    ~]# virsh net-undefine --network <network name>
    

How it works…

As you can see, it's easy to create multiple storage pools using the pool-define-as command. Each type of storage pool requires its own set of arguments. In the case of the NFS storage pool, we need to specify the NFS server and the export; this is done by specifying --source-host and --source-path respectively.

Creating networks is a bit more complex as it requires you to create an XML configuration file. When you want a network connected transparently to your physical network, a bridged network is your only option, as it is impossible to bind a virtual network directly to a physical network interface.

There's more…

The storage backends created in this recipe are not the only options. Libvirt also supports the following backend pools:

Local storage pools

Local storage pools are directly connected to the physical machine. They include local directories, disks, partitions, and LVM volume groups. Local storage pools are not suitable for enterprise use as they do not support live migration.
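
As an illustration, an LVM-backed pool can be defined in much the same way as the directory pool earlier in this recipe; the volume group name vg_guests is an assumption for this sketch:

~]# virsh pool-define-as --name lvm-vm --type logical --source-name vg_guests --target /dev/vg_guests
~]# virsh pool-autostart lvm-vm
~]# virsh pool-start lvm-vm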

Networked or shared storage pools

Network storage pools include storage shared through standard protocols over a network. This is required when we migrate virtual machines between physical hosts. The supported network storage protocols are Fibre Channel-based LUNs, iSCSI, NFS, GFS2, and SCSI RDMA.
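
For comparison with the NFS pool created earlier, an iSCSI-backed pool is defined along the same lines; the portal hostname and IQN below are placeholders:

~]# virsh pool-define-as --name iscsi-vm --type iscsi --source-host iscsi.example.com --source-dev iqn.2015-06.com.example:vms --target /dev/disk/by-path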

By defining the storage pools and networks in libvirt, you ensure the availability of the resources for your guest. If, for some reason, the resource is unavailable, the KVM will not attempt to start the guests that use these resources.

When checking the man page for virsh(1), you will find commands similar to net-define and pool-define: net-create and pool-create (and pool-create-as). The net-create command, like pool-create and pool-create-as, creates transient (temporary) resources, which are gone when libvirt is restarted. On the other hand, net-define and pool-define (as well as pool-define-as) create persistent (permanent) resources, which will still be there after you restart libvirt.
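
To see the difference in practice, you can feed the same XML to net-create instead of net-define (on a host where that network is not already defined) and compare the Persistent column in the listing:

~]# virsh net-create /tmp/net-nat.xml
~]# virsh net-list --all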

See also

You can find out more on libvirt storage backend pools at https://libvirt.org/storage.html

More information on libvirt networking can be found at http://wiki.libvirt.org/page/Networking