MySQL Admin Cookbook LITE: Replication and Indexing

Setting up automatically updated slaves of a server based on a SQL dump

In this recipe, we will show you how to prepare a dump file of a MySQL master server and use it to set up one or more replication slaves. These will automatically be updated with changes made on the master server over the network.

Getting ready

You will need a running MySQL database server that will act as the replication master and at least one more server to act as a replication slave. The slave needs to be a separate MySQL instance with its own data directory and configuration. It can reside on the same machine if you just want to try this out, but in practice a second machine is recommended, because the very goal of this technique is to distribute data across multiple pieces of hardware, not to place an even higher burden on a single one.

For production systems, you should pick a time when the load on the master machine is light, often during the night when fewer users are accessing the system. Taking the SQL dump uses some extra resources, but unless your server is already maxed out, the performance impact is usually not a serious problem. Exactly how long the dump will take depends mostly on the amount of data and the speed of the I/O subsystem.

You will need an administrative operating system account on the master and the slave servers to edit the MySQL server configuration files on both of them. Moreover, an administrative MySQL database user is required to set up replication.

We will just replicate a single database called sakila in this example.

Tip

Replicating more than one database

If you want to replicate more than one schema, add their names to the commands shown below; see the sketch after this tip. To replicate all of them, leave out any database name from the command line.
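For the mysqldump command used in step 5, this translates into the --databases and --all-databases switches. A minimal sketch, assuming a hypothetical second schema named employees:

    $ mysqldump -uUSER -pPASS --master-data --single-transaction \
      --databases sakila employees > multi_master.sql
    $ mysqldump -uUSER -pPASS --master-data --single-transaction \
      --all-databases > full_master.sql

With --databases, the dump also contains the necessary CREATE DATABASE statements, so you can skip creating the schemas manually on the slaves. Remember to adjust the replicate-wild-do-table settings on the slaves accordingly.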

How to do it...

  1. At the operating system level, connect to the master machine and open the MySQL configuration file with a text editor. Usually it is called my.ini on Windows and my.cnf on other operating systems.
  2. On the master machine, make sure the following entries are present and add them to the [mysqld] section if not already there:
    server-id=1000
    log-bin=master-bin
    

    If one or both entries already exist, do not change them, but simply note their values. The log-bin setting does not require a value; it can also stand alone.

  3. Restart the master server if you had to modify the configuration.
  4. Create a user account on the master that can be used by the slaves to connect:
    master> grant replication slave on *.* to 'repl'@'%' identified by 'slavepass';
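
    Note that MySQL 8.0 and later no longer support the combined GRANT ... IDENTIFIED BY syntax shown above. On such servers, create the account first and then grant the privilege:

    master> CREATE USER 'repl'@'%' IDENTIFIED BY 'slavepass';
    master> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';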
    
  5. Using the mysqldump tool included in the default MySQL install, create the initial copy to set up the slave(s):
    $ mysqldump -uUSER -pPASS --master-data --single-transaction \
      sakila > sakila_master.sql
    
  6. Transfer the sakila_master.sql dump file to each slave you want to set up, for example, by using an external drive or network copy.
  7. On the slave, make sure the following entries are present in the [mysqld] section, adding them if necessary:
    server-id=1001
    replicate-wild-do-table=sakila.%
    

    When adding more than one slave, make sure the server-id setting is unique among master and all slaves.
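
    For instance, a second slave could keep the replication filter identical and only change the ID:

    server-id=1002
    replicate-wild-do-table=sakila.%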

  8. Restart the slave server.
  9. Connect to the slave server and issue the following commands (assuming the data dump was stored in the /tmp directory):
    slave> create database sakila;
    slave> use sakila;
    slave> source /tmp/sakila_master.sql;
    slave> CHANGE MASTER TO master_host='master.example.com', master_port=3306, master_user='repl', master_password='slavepass';
    slave> START SLAVE;
    
  10. Verify the slave is running with:
    slave> SHOW SLAVE STATUS\G
    *************************** 1. row ***************************
    ...
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    ...
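
As a quick smoke test, you can change some data on the master and check that it arrives on the slave a moment later. The language table from the sakila sample schema is convenient for this; the inserted value is arbitrary:

    master> INSERT INTO sakila.language (name) VALUES ('Quechua');
    slave> SELECT name FROM sakila.language ORDER BY language_id DESC LIMIT 1;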

How it works...

Several of the steps in the previous section ensure that both master and slave are configured with different server-id settings. This is of paramount importance for a successful replication setup: if you fail to provide unique server-id values for all your server instances, you might see strange replication errors that are hard to debug.

Moreover, the master must be configured to write binlogs—a record of all statements manipulating data (this is what the slaves will receive).
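
If you are unsure whether binary logging is actually enabled, you can check on the master. With the log-bin setting from this recipe, the file names reported will start with master-bin; the exact position varies:

    master> SHOW VARIABLES LIKE 'log_bin';
    master> SHOW MASTER STATUS;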

Note

Before taking a full content dump of the sakila demo database, we create a user account for the slaves to use. This needs the REPLICATION SLAVE privilege.

Then a data dump is created with the mysqldump command line tool. Notice the provided parameters --master-data and --single-transaction. The former is needed to have mysqldump include information about the precise moment the dump was created in the resulting output. The latter parameter is important when using InnoDB tables, because only then will the dump be created based on a transactional snapshot of the data. Without it, statements changing data while the tool was running could lead to an inconsistent dump.

The output of the command is redirected to the /tmp/sakila_master.sql file. As the sakila database is not very big, you should not see any problems. However, if you apply this recipe to larger databases, make sure you send the data to a volume with sufficient free disk space, as the SQL dump can become quite large. To save space, you may optionally pipe the output through gzip or bzip2 (shown below), at the cost of a higher CPU load on both the master and the slaves, because they will need to unpack the dump before they can load it.
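A minimal sketch of the compressed variant, using the same placeholders as before; the dump is unpacked on the slave before being sourced as in step 9:

    $ mysqldump -uUSER -pPASS --master-data --single-transaction \
      sakila | gzip > sakila_master.sql.gz

After transferring the file to the slave:

    $ gunzip sakila_master.sql.gz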

If you open the uncompressed dump file with an editor, you will see a line with a CHANGE MASTER TO statement. This is what --master-data is for. Once the file is imported on a slave, it will know at which point in time (well, rather at which binlog position) this dump was taken. Everything that happened on the master after that needs to be replicated.
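With the log-bin=master-bin setting from this recipe, that line will look similar to the following; the exact file number and position depend on your server:

    CHANGE MASTER TO MASTER_LOG_FILE='master-bin.000001', MASTER_LOG_POS=98;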

Finally, we configure the slave to connect using the credentials set up on the master earlier, and then start the replication. Notice that the CHANGE MASTER TO statement used for this does not include any information about log positions or file names, because that was already taken from the dump file just read in.

From here on, the slave will receive all data-manipulating SQL statements from the master, store them in its relay logs, and then execute them against the local data set.
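
On a slave with default settings, the relay logs and the accompanying state files can be found in the data directory. The listing below is only illustrative: the path is a common Linux default, and the relay log prefix is derived from the slave's hostname:

    $ ls /var/lib/mysql
    master.info
    relay-log.info
    slavehost-relay-bin.000001
    slavehost-relay-bin.index
    ...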

Note

This recipe is very important because the following recipes build on it. So in case you have not fully understood the above steps yet, we recommend you go through them again before trying out more complicated setups.

See also

  • Avoiding duplicate server IDs