High Availability MySQL Cookbook

By: Alexander Davies

Overview of this book

High availability is something that all websites hope to achieve, especially those that are linked to big companies. MySQL, an open source relational database management system (RDBMS), can be made highly available to protect against corruption, hardware failure, software crashes, and user error. Running a single MySQL setup is quite simple; things start getting complex when you think about the best way to provide redundancy. A large number of techniques are available to add redundancy and high availability to MySQL, but most are both poorly understood and poorly documented.

This book provides recipes showing how to design, implement, and manage a MySQL Cluster and how to achieve high availability using MySQL replication, block-level replication, shared storage, and the open source Global File System (GFS). It covers all the major techniques available for increasing the availability of your MySQL databases, and demonstrates how to design, implement, troubleshoot, and manage a highly available MySQL setup using any one of several techniques, shown in different recipes. It is based on MySQL Cluster 7.0, MySQL 5.0.77 (for non-clustered recipes), and CentOS / Red Hat Enterprise Linux 5.3.

The book starts by introducing MySQL Cluster as a technology and explaining how to set up a simple cluster. It will help you master the options available for backing up and restoring files in a MySQL Cluster. By following the practical examples in this book, you will learn how to manage the MySQL Cluster, and we will also discuss some troubleshooting aspects. We then look at achieving high availability for MySQL databases with MySQL replication, block-level replication (DRBD), and shared storage (a SAN or NAS). Finally, you will learn the principles of performance tuning and how to tune a MySQL database for optimal performance.

Designing a MySQL Cluster


In this recipe, we will explain how to design a MySQL Cluster correctly. MySQL Cluster is an extremely powerful technology and this recipe will outline and briefly discuss some of the factors that you should consider while designing a MySQL Cluster.

We start with a high-level description of how a MySQL Cluster works in the How to do it... section. The How it works... section explores the parts that make up a cluster in more detail, and the There's more... section discusses the way that a MySQL Cluster stores and retrieves data.

How to do it...

MySQL Clusters are built from three different types of node. When connected together, these nodes allow a cluster to present a clustered storage engine to MySQL servers, to which clients connect. To build a cluster, you must select the hardware on which to run at least one node of each type. We now discuss these types of node and how they connect together.

Note

A node does not mean a single physical machine but a process that forms a part of a cluster. It is quite possible to run multiple nodes (that is, processes) on the same physical machine. For example, it is common to run a management node on the same host as the SQL node.

The three kinds of nodes that make up a MySQL Cluster are:

  1. Management node—these nodes hold information about the makeup of the cluster, provide a central point for collecting information such as logs, and control the other nodes. A management node must be started before any other node.

  2. Data or storage node—this is the ndbd process that holds the data in the cluster and does the low-level work of answering queries in conjunction with the other storage nodes in the cluster.

  3. API nodes—these are the nodes that connect to the cluster to access the data. The most common example of this type of node is a mysqld process compiled with support for MySQL Cluster, commonly known as a SQL node. In this book, we use the terms API node and SQL node interchangeably, except when referring to an API node that specifically is not a mysqld process.

To design a cluster with redundancy of operations (in other words, one that is highly available), you require at least one management node, two storage nodes, and two SQL nodes. A management node is only required when starting another node in your cluster; a cluster that is already running will happily continue without one. Note, however, that when there is no management node running, there is no central point from which to control the cluster and view logs and, critically, if another node fails, it will not be able to restart.

If you were building the simplest possible cluster, it would consist of:

  • Two similar servers, each running a storage and SQL node

  • One small server, running a management node

Three physical pieces of hardware are required to handle the case where a cluster is cut clean down the middle (for example, in a two-node cluster, one node's network cable is unplugged). This is called a split brain problem and is explored in more detail in the How it works… section that follows. In short, with only two nodes, in the event of nodes being unable to communicate (for example, when one node fails) both nodes must shut down to protect data consistency—which makes for a rather pointless cluster.
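As an illustration of this minimal layout, the management node's config.ini file might be sketched as follows. This is only a sketch: the hostnames, directories, and memory figures are placeholders, not recommendations, and sizing is covered by the recipe referenced below.

```ini
# Illustrative config.ini for the simplest redundant cluster:
# one management node, plus two servers each running a storage
# node (ndbd) and a SQL node (mysqld). Hostnames are placeholders.
[ndbd default]
NoOfReplicas=2            ; each fragment of data is stored on two nodes
DataMemory=512M           ; placeholder; size with the DataMemory recipe
IndexMemory=64M           ; placeholder

[ndb_mgmd]
HostName=mgm.example.com
DataDir=/var/lib/mysql-cluster

[ndbd]
HostName=node1.example.com
DataDir=/var/lib/mysql-cluster

[ndbd]
HostName=node2.example.com
DataDir=/var/lib/mysql-cluster

[mysqld]
HostName=node1.example.com

[mysqld]
HostName=node2.example.com
```

Setting up and starting nodes from a configuration like this is covered step by step in the installation recipes.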

When it comes to calculating how many storage nodes you actually require, the recipe Calculating DataMemory and IndexMemory in Chapter 3, MySQL Cluster Management will tell you the total amount of memory required for your cluster. From this number, the desired level of redundancy (see the following information box), the most cost-efficient amount of RAM per server, and the performance required, it is possible to calculate the optimum number of servers for storage nodes and the RAM requirement for each. For a simple test cluster of two nodes, you simply require enough spare RAM per storage node to carry all of the data that you plan to store in the cluster, plus a little more space (approximately 20 percent) to handle overheads and indexes.
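As a back-of-envelope illustration of that calculation (the 20 percent overhead figure comes from the paragraph above; the helper function and the example numbers are otherwise invented for this sketch):

```shell
# Rough per-data-node RAM estimate (illustrative only).
#   total_data_mb : data you plan to store, in MB
#   replicas      : NoOfReplicas (copies of each fragment)
#   nodes         : number of data nodes
# Each fragment is stored 'replicas' times, spread over 'nodes'
# servers, plus ~20% for indexes and overhead.
per_node_mb() {
  total_data_mb=$1; replicas=$2; nodes=$3
  echo $(( total_data_mb * replicas * 120 / 100 / nodes ))
}

per_node_mb 4096 2 4   # 4 GB of data, NoOfReplicas=2, four data nodes
                       # prints 2457 (about 2.4 GB per node)
```

For a real cluster, replace this rough arithmetic with the figures produced by the Calculating DataMemory and IndexMemory recipe.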

Note

The level of availability refers to the number of servers that you wish to store each fragment of data on for redundancy. This is known as NoOfReplicas and is difficult to change in the future—often it is set to 2 or 4. Your number of data nodes must be a multiple of NoOfReplicas.

It is extremely good practice to keep SQL nodes and storage nodes on different servers, in order to prevent a large query from causing swapping and crashing the storage node located on the same server; so, in practice, a cluster of only three servers is unusual.

The nodes in your cluster absolutely require uninterrupted and private network connections between them. If this is not the case, there are the following three problems:

  1. Firewalls can cause bizarre behavior as MySQL Cluster daemons use a wide range of ports.

  2. Data sent between nodes in a cluster is not encrypted. Therefore, anyone with access to that network can access all data stored in the cluster.

  3. No form of security exists in communication between nodes, so anyone with access to the storage or management nodes can, for example, shutdown the cluster or inject their own data.

To avoid this, connect your cluster nodes to a private, non-firewalled network and dedicated switch, and protect the public interface with a good firewall.

MySQL Clusters must be built with nodes having very low latency connections—generally, just a pair of Ethernet switches. It is not possible to build a cluster over a higher-latency link (such as the Internet), although replication between MySQL Clusters is covered in Chapter 3.

How it works…

MySQL Cluster sits at the storage engine layer of a MySQL database server, with a storage engine known as NDBCLUSTER. This means that for clients connecting to that MySQL server, a MySQL Cluster table is exactly the same as a local InnoDB or MyISAM table. It is also quite conventional to only have some tables configured to use MySQL Cluster, as the following diagram demonstrates with a single MySQL server (mysqld process) running one database (database1) with three table types—MyISAM tables, InnoDB tables, and MySQL Cluster tables (NDBCLUSTER).

All of the cluster magic that allows physical servers holding parts of your database to fail without causing downtime is handled below the level of the MySQL server on which an incoming query is processed. The following diagram shows a MySQL Server (mysqld) connecting to a four-storage node MySQL Cluster, a local MyISAM table, and an InnoDB table stored on an external disk array. The client cannot tell the difference between these three types of tables.
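Because the engine choice is made per table, a setup like the one in these diagrams is just a matter of the ENGINE clause. The following sketch uses invented database and table names for illustration:

```sql
-- Illustrative only: one database mixing all three engines.
-- The NDBCLUSTER table is stored in the cluster; the others are
-- local to this MySQL server, yet clients query them identically.
CREATE DATABASE database1;
USE database1;
CREATE TABLE t_myisam  (id INT PRIMARY KEY, v VARCHAR(32)) ENGINE=MyISAM;
CREATE TABLE t_innodb  (id INT PRIMARY KEY, v VARCHAR(32)) ENGINE=InnoDB;
CREATE TABLE t_cluster (id INT PRIMARY KEY, v VARCHAR(32)) ENGINE=NDBCLUSTER;
```

An existing table can likewise be moved into the cluster with ALTER TABLE ... ENGINE=NDBCLUSTER, subject to the limitations discussed later in this recipe.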

MySQL Cluster has a shared nothing property, which means that unlike most clustering solutions, there is genuinely no single point of failure. On the other hand, in many other systems including some that we will cover later in this book, there is what can be considered a very reliable single point of failure—often a redundant shared disk system which can still fail as a result of a single event, such as a physical problem with the unit.

To achieve this no single point of failure architecture, MySQL Clusters store all of the data in the cluster on more than one node, which obviously has a performance impact. To mitigate this potential impact, most production clusters store both data and indexes in storage node memory (RAM).

Storing data in memory may sound alarming, and it is possible to configure tables to be stored on the disks of the storage nodes (covered in a later recipe); however, RAM-based storage provides significantly greater performance. By ensuring that data is stored on at least two different physical servers at a time, it is unlikely that a failure (such as of a disk drive or PSU) will occur in all the nodes holding a fragment of data at the same time.

However, it is still of course possible for all the nodes to fail (for example, in a data center-wide power failure). To ensure that this does not result in the loss of all data in the cluster, running storage nodes constantly checkpoint the data stored in memory to persistent storage on disk.

In clusters consisting of more than two storage nodes, it is possible for all the servers holding a single fragment of data to fail. In this case, the cluster shuts itself down to ensure data consistency—this process is covered in more detail in the following There's more… section.

There's more…

In the background, a MySQL Cluster works by chopping up (also known as partitioning) your data into chunks (known as partitions, or by the preferred MySQL Cluster term fragments) and storing each fragment on as many different servers (data nodes) as you have selected. In this section, this process is explained in more detail. This is important to understand for anything other than the most superficial use of MySQL Cluster.

This process is shown in the following diagram, which shows a cluster design for two data nodes and two copies of each fragment of data to be held within the cluster. The MySQL Cluster has automatically worked out that it needs to partition our data into four partitions. Given this, it will ensure that each node has two fragments of data and that any single node does not have two identical chunks. It can be shown as follows:

In clusters where there are more storage nodes than the number of copies of each piece of data (which in many clusters is two, that is, each fragment is stored on two separate nodes), the cluster must further split the storage nodes into nodegroups. Nodegroups are groups of storage nodes that store the same fragments of data, and as long as one node in each nodegroup remains available, the cluster will have an entire copy of the data.

While each node in a nodegroup has the same data, each fragment within the cluster has a primary copy and one or more replicas (the number depending on NoOfReplicas again). The primary copy ("fragment replica") for each fragment or partition will be moved around automatically by the cluster to be spread out among the nodes for performance reasons. In the case of two nodes per nodegroup, each node will have approximately 50 percent of the primary fragments.
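The arithmetic behind nodegroups and fragments can be sketched as follows (assuming the number of partitions roughly equals the number of data nodes, which is the NDB default; the helper functions are invented for illustration):

```shell
# Illustrative nodegroup/fragment arithmetic for MySQL Cluster.
# Nodegroups: sets of data nodes holding the same fragments.
nodegroups() { echo $(( $1 / $2 )); }        # data_nodes / NoOfReplicas
# Total fragment replicas stored across the cluster: each partition
# (roughly one per data node, by default) is stored NoOfReplicas times.
fragment_replicas() { echo $(( $1 * $2 )); } # partitions * NoOfReplicas

nodegroups 4 2          # four data nodes, NoOfReplicas=2: prints 2
fragment_replicas 2 2   # two data nodes, NoOfReplicas=2: prints 4
```

The second call matches the two-node example above: four fragment replicas spread over two nodes, so each node holds two fragments and no node holds two identical ones.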

In a MySQL Cluster, if we wish to change a piece of our data (one or more fragments), we must modify each copy of that fragment; in other words, we must make the same change on every node that stores it. MySQL Cluster will attempt to do this in parallel, that is, it will send the request to change the fragment simultaneously to all nodes containing the relevant fragment. However, until the change has been committed, it can still be rolled back in case a node fails.

MySQL Cluster declares the transaction committed to the client once all active data nodes with the relevant fragment on them have received the request to update their fragments and a single storage node has committed all of the changes.

This process is called a two-phase commit, and while it increases data integrity significantly, it reduces performance. The speed of a cluster executing transactions is a function of the following parameters (with the first and last extremely quick, due to all the data being stored in memory rather than on disk):

  • Time taken to locate all nodes involved in a transaction

  • Network latency talking to all involved nodes

  • Bandwidth available for transferring data between nodes

  • Time taken for all nodes to retrieve and / or change relevant data

This process is almost certainly slower than just accessing data from a locally attached disk (or from a kernel cache of a disk) and therefore, MySQL Cluster will almost always be slower in terms of query execution time for low workloads. MySQL Cluster may be faster under very high load (where its near linear scalability kicks in, as the load is spread over more nodes) and is valuable at all demand levels for its high availability. When you are designing your cluster, consider how much cost and performance you are willing to trade for scalability and high availability.

When considering deploying MySQL Cluster, it is essential to have an idea of both the problems that it will not solve and its specific requirements.

Note

For a complete list of requirements and limitations, visit the online MySQL Cluster reference guide (accessible from http://dev.mysql.com/doc/).

The remainder of this section covers and explains the limitations that most commonly cause problems for a MySQL Cluster administrator.

  • Operating System requirements: MySQL Cluster runs on several operating systems, and is specifically supported on the following:

    • Linux (Red Hat and SUSE)

    • Solaris

    • Mac OS X

    • Windows

    When considering which of the supported operating systems to use, it is worth noting that far and away the most heavily tested is Linux.

  • Limitations using indexes: There are some common limitations related to indexes inside MySQL Cluster tables as follows:

    • Full-text indexes do not work with MySQL Cluster. Consider using Sphinx (http://www.sphinxsearch.com/) and / or a separate table for your full-text searches, possibly using another high-availability technique such as MySQL Replication in order to run your intensive search queries against a replica without affecting performance on the master (Chapter 5, High Availability with MySQL Replication).

    • Text or BLOB fields cannot have indexes (however, VARCHAR fields can).

    • You may only have one AUTO_INCREMENT field per table. If your table does not have a primary key when it is created or altered to use MySQL Cluster, a hidden AUTO_INCREMENT primary key field is created (and used for partitioning). If this happens, you will not be able to create another AUTO_INCREMENT field, even though you cannot see the one that exists. Therefore, ensure that you always define a primary key in your tables (often an AUTO_INCREMENT field). One of the key differences from InnoDB is that, for MySQL Cluster tables, foreign key constraints are simply ignored (the same behavior as the MyISAM storage engine).
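A sketch of the safe pattern (the table and column names are invented for illustration): define the primary key explicitly when creating or altering a table to use the cluster engine, rather than relying on the hidden key.

```sql
-- Explicit AUTO_INCREMENT primary key: no hidden key is created,
-- the column stays visible, and it is used for partitioning.
CREATE TABLE customers (
  id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
  name VARCHAR(64)  NOT NULL,
  PRIMARY KEY (id)
) ENGINE=NDBCLUSTER;
```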

  • Limitations using transactions: While MySQL Cluster is transactional, in general, it does not support very large individual transactions particularly well.

    The limit is difficult to quantify and depends on node performance, network connections, and the number of concurrent transactions. In general, however, applications that use large numbers of small transactions experience fewer problems with MySQL Cluster. Therefore, if you have the choice, design the application that is to use MySQL Cluster to issue lots of small transactions wherever possible.
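For example, a bulk delete can be broken into many small transactions instead of one huge one. The table name, date, and batch size below are invented for illustration:

```sql
-- Illustrative only: remove old rows in batches of 1,000, committing
-- each batch, rather than deleting millions of rows in one transaction.
DELETE FROM big_table WHERE created < '2009-01-01' LIMIT 1000;
-- Re-run the statement until it reports 0 rows affected.
```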

  • Common "Hard Limits" to reach: The following list of unchangeable limitations does vary significantly from release to release, but the limitations of the current version (that is, MySQL Cluster 7) are as follows:

    • The total number of objects (databases, tables, and indexes) cannot exceed 20320

    • The total number of attributes (columns and indexes) per table cannot exceed 128

    • The total size of a row cannot exceed 8 KB

    • The total number of storage nodes in a cluster cannot exceed 48

    • The total number of nodes (storage, management, and SQL) cannot exceed 255

  • Networking requirements: MySQL Clusters require inter-cluster network traffic to have extremely low latency (small round trip (ping) times) and almost no packet loss.

    If this is not the case, performance will generally be extremely poor and it is possible that nodes will continually be kicked out of the cluster for not replying to heartbeat packets from other nodes quickly enough. To achieve these requirements, it is desirable for all members of the cluster to be interconnected using the same switch infrastructure, which should have a speed of at least one gigabit.

    Any network design involving a layer-3 device (such as a router) should be avoided wherever possible (although with modern wire speed, layer 3 forwarding for network devices can be as fast as layer 2). It is not recommended to attempt to get a cluster to work over a large network such as the Internet.

    While not technically impossible, it is strongly recommended that you do not attempt to change the timeout values in order to configure a cluster over a high-latency link, as this will not work well.

    If there is a need to replicate data across a WAN, consider replication between clusters (covered in the Replication between MySQL Clusters recipe in Chapter 3). Within a site, it is also possible to use high-speed cluster interconnects or shared-memory segments, which themselves provide extremely low-latency, high-reliability links.

  • System RAM requirements and best practice: MySQL Cluster is extremely memory-intensive. Although actual data can be stored both on disk and in memory (RAM), the performance of in-memory tables is, in most cases, an order of magnitude better than that of disk-based tables.

    Furthermore, even for disk-based tables, indexes (which can take up a significant amount of space) must still be stored in memory. Therefore, the RAM usage on data nodes is high, and the overall RAM requirement for a cluster is likely to be an order of magnitude more than that required by a standalone MySQL server using InnoDB or MyISAM.

    There are two major points to consider at an early stage:

    • Firstly, 32-bit operating systems can have a problem allocating more than 2 gigabytes of RAM to a single process. They will also certainly have a problem addressing more than 4 GB RAM system-wide (even with special modifications to the 32-bit kernel to hack around this limit). Therefore, in most real-world clusters, a 64-bit operating system is likely to make more sense.

    • Secondly, if a MySQL Cluster data node does not have enough physical RAM, either it will run out of memory completely, in which case the kernel's out-of-memory (OOM) killer will almost certainly kill the data node process, or the operating system will begin to swap. Swapping is likely to result in poor performance, as the data node will not be able to respond to heartbeats in a reasonable time and will therefore be ejected from the cluster.

  • Processor architecture requirements: MySQL Cluster will run on both 32-bit and 64-bit architectures for all supported operating systems. When deciding which to select, it is worth remembering the limitations of 32-bit architectures on RAM, and also that the MySQL Cluster storage node process is available in two forms: a single-threaded and a multi-threaded binary.

    The single-threaded version of the storage node process has seen significantly more testing. However, the multi-threaded binary is simpler to use than trying to run multiple instances of the single-threaded binary on a single machine.

    Operating systems can be described as little-endian or big-endian (endianness can be thought of as the byte ordering used by an operating system). There are two parts to this constraint:

    • Firstly, all machines used in the cluster must have the same endianness. For example, you cannot have an x86 (little-endian) management node talking to data nodes running on PowerPC (big-endian).

    • Secondly, it is important to remember that the MySQL client API is not endian-sensitive, so your big-endian cluster can happily communicate with applications running on both big- and little-endian operating systems.