Setting up MySQL Asynchronous Replication for High Availability


MySQL replication is often positioned as a way to provide extra throughput for your database (especially for workloads with a high proportion of read operations). What tends to be overlooked is how it can be used to provide high availability (HA) – no matter how much redundancy you have at a local level, your system remains at risk from a single catastrophic failure: loss of power, earthquake, terrorist attack, etc. By using MySQL asynchronous replication to maintain a geographically remote copy of the data, service can be maintained through any single-site failure.

As this replication is asynchronous, there are no low latency demands on the network and so the two sites can be thousands of miles apart while utilising low-cost connections.

This article provides a simple example of how to set up asynchronous replication between 2 MySQL databases in order to provide a Highly Available solution. First, replication is set up between 2 databases whose tables use the MyISAM storage engine; then between 2 MySQL Cluster databases, where I also configure multi-master replication so that changes can be made at either site.

Subsequent articles will build on this to show:

  • Collision detection and resolution when using MySQL Cluster multi-master asynchronous replication
  • Introducing asynchronous replication to a running MySQL Cluster database

Throughout this article, two machines are used: ws1 (192.168.0.3) and ws2 (192.168.0.4). ws1 will be set up as the master and ws2 as the slave (in the multi-master configuration, both act as both master and slave).

Setting up replication for non-Cluster databases

Replication is performed from one MySQL Server to another; the master makes the changes available and then one or more other Servers pick up those changes and apply them to their own databases. In this example, both databases will store the table data using the same storage engine (MyISAM) but it’s possible to mix and match (for example, take a look at MySQL Cluster – flexibility of replication). As this is intended as a simple introduction to replication, I’m keeping life simple by assuming that this is all being set up before the database goes into production – if that isn’t the case for you and you need to cope with existing data, then check out the MySQL documentation or subsequent articles on this site.

The my.cnf files can be set up as normal but the one for the MySQL Server that will act as the master needs to have binary logging enabled. Also, each of the servers needs to have a unique server-id. Here are the my.cnf files used for this example:

my.cnf (Master)
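The original listing was not preserved here; a minimal master configuration, with hypothetical log file and path names, might look like this (server-id and log-bin are the essential lines):

```ini
[mysqld]
server-id=1         # must be unique across all servers in the topology
log-bin=ws1-bin     # enable binary logging (required on the master)
```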

my.cnf (Slave)
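The slave’s configuration only needs a server-id that differs from the master’s; a minimal sketch:

```ini
[mysqld]
server-id=2         # different from the master's server-id
```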

Fire up the Master MySQL Server:
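The exact commands from the original post were not preserved; a sketch, assuming mysqld is on the PATH and the configuration file paths shown are examples:

```shell
# On ws1 (master) -- point mysqld at the master configuration
mysqld --defaults-file=/path/to/master/my.cnf &

# On ws2 (slave)
mysqld --defaults-file=/path/to/slave/my.cnf &
```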

The slave needs a userid/password in order to access the master server – best practice is to create a dedicated user with just the required privileges:
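A sketch of creating that user on the master; the user name and password here are examples, not the ones from the original post:

```sql
-- On the master (ws1): allow the slave host to connect for replication only
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'192.168.0.4'
    -> IDENTIFIED BY 'password';
```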

The slave can now be told to start processing the replication data that will be staged by the master server:
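A sketch of the slave-side commands, using the example credentials from above:

```sql
-- On the slave (ws2): point at the master and start replicating
mysql> CHANGE MASTER TO
    ->   MASTER_HOST='192.168.0.3',
    ->   MASTER_USER='repl_user',
    ->   MASTER_PASSWORD='password';
mysql> START SLAVE;

-- Confirm that both the IO and SQL threads report "Yes"
mysql> SHOW SLAVE STATUS\G
```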

Now to test that replication is up and running, create a table on the master, add some data and then check that the table and data can be read from the slave:
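The table and data used in the original test were not preserved; any simple table will do – for example:

```sql
-- On the master (ws1)
mysql> CREATE TABLE towns (town VARCHAR(20)) ENGINE=MyISAM;
mysql> INSERT INTO towns VALUES ('Maidenhead'), ('Reading');

-- On the slave (ws2): the table and both rows should appear
mysql> SELECT * FROM towns;
```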

Multi-Master Replication with MySQL Cluster


There are a few asynchronous replication capabilities that are unique to MySQL Cluster – one is that changes are replicated even if they are made directly to the data nodes using the NDB API; another is that replication can be performed in both directions, i.e. multi-master. One of the advantages of this is that you can share both read and write operations across both sites so that no capacity is wasted – it also gives you the confidence that either site is sane and ready to take over from the other at any point. You should aim to minimise how often the same rows are modified at the same time on both sites – conflict detection/resolution has been implemented, but it can only roll back the row changes that conflict with other row changes, rather than the full transaction.

It’s important to note that in this article, I’m not talking about the synchronous replication that takes place between data nodes within a single MySQL Cluster site (that happens in parallel and is orthogonal to the asynchronous replication to a remote site).

When performing multi-master asynchronous replication between 2 Clusters, 1 (or more) MySQL Servers in each Cluster is nominated as a master and 1 or more as slaves (it can be the same server that takes on both roles and you can have multiple channels set up in case a channel or MySQL Server is lost). Changes made to the data at either site through any of their MySQL Servers (or directly to the data nodes using the NDB API) will be replicated to the other site.

I will focus on setting up the replication; for the steps to configure and run each MySQL Cluster site, refer to Deploying MySQL Cluster over multiple hosts.

Most of the steps are very similar to those in the first example – the main differences come when introducing asynchronous replication to a MySQL Cluster instance that already contains data and is up and running (processing updates), which will be covered in a subsequent article.

Binary logging needs to be enabled on the MySQL Server(s) at each site that will act as a replication master:

my1.cnf (Master)
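The original file was not preserved; a minimal sketch for the Cluster mysqld that acts as a master – the connect string and log name are example values:

```ini
[mysqld]
ndbcluster                      # use the NDB storage engine
ndb-connectstring=192.168.0.3   # this site's management node (example address)
server-id=3                     # unique across both sites
log-bin=ws1-cluster-bin         # binary logging: this server is a master
```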

my1.cnf (Slave)
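In the multi-master configuration the “slave” mysqld at the other site also acts as a master, so it too has binary logging enabled; a sketch with example values:

```ini
[mysqld]
ndbcluster
ndb-connectstring=192.168.0.4   # the other site's management node (example address)
server-id=4                     # unique across both sites
log-bin=ws2-cluster-bin         # also a master in the multi-master setup
```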

The MySQL Cluster nodes (including the MySQL Servers (mysqld)) from both sites should be started up as normal.

In this case, the replication users should be set up for both Clusters:
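A sketch of creating a replication user at each site, mirroring the single-direction example above (user name and password are examples):

```sql
-- On the master mysqld at site 1 (ws1)
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'192.168.0.4'
    -> IDENTIFIED BY 'password';

-- On the master mysqld at site 2 (ws2)
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'192.168.0.3'
    -> IDENTIFIED BY 'password';
```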

Replication can then be set up and started on each of the MySQL Servers (those acting as slaves for each Cluster):
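Each site’s slave points at the other site’s master, so the commands are issued in both directions; a sketch using the example credentials from above:

```sql
-- On ws2: replicate from the master at site 1...
mysql> CHANGE MASTER TO MASTER_HOST='192.168.0.3',
    ->   MASTER_USER='repl_user', MASTER_PASSWORD='password';
mysql> START SLAVE;

-- ...and on ws1: replicate from the master at site 2
mysql> CHANGE MASTER TO MASTER_HOST='192.168.0.4',
    ->   MASTER_USER='repl_user', MASTER_PASSWORD='password';
mysql> START SLAVE;
```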

This time, to make sure that replication is working in both directions, I make changes to both Clusters and then check that they appear at the other site:
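The original test commands were not preserved; any NDB table works – for example:

```sql
-- On ws1: create an NDB table and add a row
mysql> CREATE TABLE towns (town VARCHAR(20)) ENGINE=ndbcluster;
mysql> INSERT INTO towns VALUES ('Maidenhead');

-- On ws2: the row should be visible; add one in the other direction
mysql> SELECT * FROM towns;
mysql> INSERT INTO towns VALUES ('Reading');

-- Back on ws1: both rows should now be present
mysql> SELECT * FROM towns;
```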

9 thoughts on “Setting up MySQL Asynchronous Replication for High Availability”

  1. Master Master Replication and Auto Increment – Mysql, Windows/Linux:

    Consider we’ve already set a master-master replication.
    Now create following table on Server1:

    CREATE TABLE temp (
    id int(10) NOT NULL auto_increment,
    PRIMARY KEY (id)
    ) ENGINE=MyISAM AUTO_INCREMENT=12 DEFAULT CHARSET=latin1;

    The table will get replicated on Mysql Server2 in the master-master setup.

    Now insert value on Mysql Server1 as follows:
    insert into temp values(null);

    On Mysql Server2 in replication you will see single row inserted.
    Now insert one row from Mysql Server2 as follows:
    insert into temp values(null);

    You should see an error:
    Error ‘Duplicate entry ‘1’ for key ‘PRIMARY’’ on query…

    The obvious problem of keeping the AUTO_INCREMENT values in sync will persist on both mysql servers.

    The solution is to use the variables auto_increment_increment and auto_increment_offset as explained below.

    – Stop both master-master replication servers.
    – Add variables to my.[ini|cnf] file.
    Eg. On both the Server’s my.[ini|cnf] add following lines:
    auto_increment_increment=1
    auto_increment_offset=2
    – Restart mysql servers.
    – Start slave.

    1. Krex,

      thanks for your response. If you set this same configuration on each server, would they not still have clashing auto-increment values? Wouldn’t it be better to set the increment to 2 on each server but the offset to 1 on one and 2 on the other? That way one server would generate odd values and the other even.

      Regards, Andrew.
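A sketch of the configuration suggested in the reply above – increment of 2 on both servers, with different offsets so that one generates odd values and the other even:

```ini
# Server 1's my.cnf -- generates 1, 3, 5, ...
auto_increment_increment=2
auto_increment_offset=1

# Server 2's my.cnf -- generates 2, 4, 6, ...
auto_increment_increment=2
auto_increment_offset=2
```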

  2. Andrew,

    You mentioned a future post would cover setting up multimaster replication using an already active cluster…is that still in the works? We’re considering doing that and I would love to read your thoughts on it.

    1. Yes – I do still plan on writing that post but unfortunately, vacation and business trips have kept me away from my target machines too much lately and it will be at least a week or so before I can write it.
