Please Note! — This post has now been superseded by the new Quick Start Guide.
For a more step-by-step approach you can also check the team's written documentation.
The multi master plugin for MySQL is here. MySQL Group Replication ensures virtually synchronous updates on any member in a group of MySQL servers, with conflict handling and failure detection. Distributed recovery is also in the package, to ease the process of adding new servers to your server group.
How do you start? Just sit back, download MySQL Group Replication from http://labs.mysql.com/ and then let us begin this journey into the world of multi master MySQL.
Prerequisites
Under its hood, the group replication plugin is powered by a group communication toolkit. This is what decides which servers belong to the server group, performs failure detection and orders server messages. This last capability is what allows the data to remain consistent across all servers.
In its latest version, the plugin relies by default on XCom, a MySQL implementation of a variation of the Paxos algorithm. Among other advantages, XCom is bundled into the plugin and is cross platform, as described here.
Another option that is still available is the Corosync Cluster Engine. Corosync, however, comes with the limitation of working only on Linux, and it also requires prior installation and configuration on every machine of the server group. For basic instructions on how to install Corosync and the recommended configurations, please see my blog post on the subject.
Plugin’s required configurations
As with any new feature, the group replication plugin comes with some limitations and requirements that stem from its underlying characteristics. When configuring a server and your data for a multi master scenario you will need to:
1) Have the binlog active and its logging format set to row.
As in standard replication, multi master replication is based on the transmission of log events. Its inner mechanisms are, however, based on the write sets generated during row based logging, so row based replication is a requirement.
Server options:
--log-bin
--binlog-format=row
2) Have GTIDs enabled.
MySQL Group Replication depends on GTIDs, which are used to identify the transactions executed in the group, and which are for that reason vital to the certification and distributed recovery processes.
Server options:
--gtid-mode=on
--enforce-gtid-consistency
--log-slave-updates
3) Use the InnoDB engine.
Synchronous multi master replication depends on transactional tables, so only InnoDB is supported. Since InnoDB is now the default engine, you only have to be careful when creating individual tables.
4) Every table must have a primary key.
Multi master concurrency control is based on primary keys. Every change made to a table row is indexed by its primary key, so this is a fundamental requirement.
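For instance, a table like the following satisfies both the engine and the primary key requirements (the table and column names here are just an illustration, not from the tutorial):

```sql
-- Compliant: InnoDB table with an explicit primary key,
-- so every row change can be indexed for certification.
CREATE TABLE inventory (
  id   INT NOT NULL PRIMARY KEY,
  name VARCHAR(64) NOT NULL
) ENGINE=InnoDB;

-- Non-compliant: no primary key, so group replication has no
-- key to index the row changes against during conflict detection.
CREATE TABLE no_pk_table (
  name VARCHAR(64) NOT NULL
) ENGINE=InnoDB;
```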
5) Table based repositories
The relay log info and master info repositories must have their type set to TABLE.
Since group replication relies on channels for its applier and recovery mechanisms, table repositories are needed to isolate their execution information.
Server options:
--master-info-repository=TABLE
--relay-log-info-repository=TABLE
6) Set the transaction write set extraction algorithm
The process of extracting what writes were made during a transaction is crucial for conflict detection on all group servers. This extracted information is then hashed using a specified algorithm that must be chosen upfront. Currently the only available algorithm is MURMUR32.
Server options:
--transaction-write-set-extraction=MURMUR32
Other current limitations:
1) Binlog Event checksum use must be OFF.
Due to needed changes in the checksum mechanism, group replication is incompatible with this feature for now.
Server options:
--binlog-checksum=NONE
2) No concurrent DDL
As it currently stands, DDL statements can't be executed concurrently with other DDL statements or even with DML.
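Putting requirements 1) to 6) and the checksum limitation together, a member's configuration file could look like the following sketch (the server-id and log name are placeholders; adjust them per member):

```
[mysqld]
server-id=1
log-bin=master-bin
binlog-format=row
binlog-checksum=NONE
gtid-mode=on
enforce-gtid-consistency
log-slave-updates
master-info-repository=TABLE
relay-log-info-repository=TABLE
transaction-write-set-extraction=MURMUR32
```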
Hands on with MySQL Group Replication
First of all, set up a group of servers of the required version and then grab the plugin binaries that come with the release, or compile them yourself following the instructions in the multi platform blog post. If you are still using Corosync, you must have its development libraries, not only the daemon, installed when compiling.
You can test this on your desktop, on a group of physical computers, or even on several virtual machines. In this example, three server instances are spawned on the same machine with different data folders.
-
Configure a server on a standalone folder and create a replication user
In case you are not used to MySQL and the creation of data directories, we show you here the basic commands to get you running. We also include basic instructions to create a replication user, which is used to establish master slave connections between members for recovery purposes. Please follow the steps shown for server1 and repeat them for the other test servers using different data folders.
On the base directory of your MySQL server execute:
./bin/mysqld --no-defaults --user=$USER --initialize --explicit_defaults_for_timestamp --basedir=. --datadir=data01
Start the server
./bin/mysqld --no-defaults --basedir=. --datadir=data01 -P 13001 --socket=mysqld1.sock
You may have noticed that a password was generated by the initialize command; it can be used for root server connections. For this simple example you can avoid the use of passwords with the --initialize-insecure option.
If, instead, you want to keep the password, you can change it by executing:
./bin/mysqladmin -u root -h 127.0.0.1 -P 13001 -p password "new_pwd"
Create a replication user (should be done for all nodes)
./bin/mysql -uroot -h 127.0.0.1 -P 13001 -p --prompt='server1>'

server1> CREATE USER 'rpl_user'@'%' IDENTIFIED BY 'rpl_pass';
GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
- Note: we are creating the user on a server with its binary log disabled; otherwise this would probably lead to replication conflicts during recovery. To do this in a replicated way, create the user on the first member of your group once it is configured, and let it be replicated through recovery to the other members when they join.
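As an alternative when the server is already running with its binary log enabled, you can keep the user creation out of that server's binary log by disabling logging for the session. This is standard MySQL behavior rather than anything specific to the plugin:

```sql
-- Disable binary logging for this session only, so the CREATE USER
-- and GRANT are not replicated and cannot conflict during recovery.
SET SQL_LOG_BIN=0;
CREATE USER 'rpl_user'@'%' IDENTIFIED BY 'rpl_pass';
GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
SET SQL_LOG_BIN=1;
```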
Shut down the server
./bin/mysqladmin -h 127.0.0.1 -P 13001 -u root -p shutdown
-
Start the servers with the plugin and all the necessary options
Note: The instructions below assume that you are using a Unix environment. On Windows, you will need to replace the uses of group_replication.so with group_replication.dll.
Server 1
./bin/mysqld --no-defaults --basedir=. --datadir=data01 -P 13001 \
--socket=mysqld1.sock --log-bin=master-bin --server-id=1 \
--gtid-mode=on --enforce-gtid-consistency --log-slave-updates \
--binlog-checksum=NONE --binlog-format=row \
--master-info-repository=TABLE --relay-log-info-repository=TABLE \
--transaction-write-set-extraction=MURMUR32 \
--plugin-dir=lib/plugin --plugin-load=group_replication.so
Server 2
./bin/mysqld --no-defaults --basedir=. --datadir=data02 -P 13002 \
--socket=mysqld2.sock --log-bin=master-bin --server-id=2 \
--gtid-mode=on --enforce-gtid-consistency --log-slave-updates \
--binlog-checksum=NONE --binlog-format=row \
--master-info-repository=TABLE --relay-log-info-repository=TABLE \
--transaction-write-set-extraction=MURMUR32 \
--plugin-dir=lib/plugin --plugin-load=group_replication.so
Server 3
./bin/mysqld --no-defaults --basedir=. --datadir=data03 -P 13003 \
--socket=mysqld3.sock --log-bin=master-bin --server-id=3 \
--gtid-mode=on --enforce-gtid-consistency --log-slave-updates \
--binlog-checksum=NONE --binlog-format=row \
--master-info-repository=TABLE --relay-log-info-repository=TABLE \
--transaction-write-set-extraction=MURMUR32 \
--plugin-dir=lib/plugin --plugin-load=group_replication.so
Alternatively, if you have a running server without the plugin loaded, you can install it at runtime. This assumes that you have GTID mode ON, row based logging, and all the above requirements correctly configured.
./bin/mysql -uroot -h 127.0.0.1 -P 13001 --prompt='server1>'
server1> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
-
Configure
The first step in configuring a MySQL server group is to define a unique name that identifies the group and allows its members to join.
This name must be defined on every member, and since it also works as the group UUID, it must be a valid UUID.
SET GLOBAL group_replication_group_name= "8a94f357-aab4-11df-86ab-c80aa9429562";
Besides this, you should also configure the access credentials for recovery.
These settings are used by joining servers to establish a slave connection to a donor when entering the group, allowing them to receive the data they are missing. Although they are ignored on the first member that forms the group, you should configure them on every server anyway, as any member may fail and be reinstated at any moment.
By default, recovery tries to connect to other servers using the user "root" with no associated password. To change these values, just set the following variables.
SET GLOBAL group_replication_recovery_user='rpl_user';
SET GLOBAL group_replication_recovery_password='rpl_pass';
Associated with these fields there are also recovery's retry count and reconnect interval.
These fields tell recovery how many times it should try to connect to the available donors, and how long to wait when every attempt to connect to all group donors fails. By default, the retry count is 86400 and the reconnect interval is 60 seconds.
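To get a feel for what the defaults imply, a quick back-of-the-envelope calculation shows the worst-case window during which recovery keeps retrying:

```shell
# With the default values, recovery makes up to 86400 attempts,
# waiting 60 seconds between full rounds over the donors.
retry_count=86400
reconnect_interval=60
total_seconds=$((retry_count * reconnect_interval))
echo "$((total_seconds / 60 / 60 / 24)) days"   # prints "60 days"
```

In other words, a member can keep trying to rejoin for roughly two months before giving up, which is why these values are usually tuned down in test setups.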
If you want to modify them, just use a variation of the example commands:
SET GLOBAL group_replication_recovery_retry_count= 2;
SET GLOBAL group_replication_recovery_reconnect_interval= 120;
XCom settings
If you are running Group Replication with XCom, you need to set some options so it can find the other group members. For each member you will then have to set:
- group_replication_local_address: The member local address, i.e., host:port where that member will expose itself to be contacted by the group.
- group_replication_peer_addresses: The list of peers that also belong to the group, comma separated: host1:port1,host2:port2. If the server is not configured to bootstrap a group, it will sequentially contact these peers in order to be added to or removed from the group. If the list contains the member's own local address, that entry is ignored.
Important: XCom ports must be different from the configured MySQL ports. XCom is an internal service that must have its own dedicated port.
If you want to change the group communication system and use Corosync, you can still do it with:
SET GLOBAL group_replication_gcs_engine= "COROSYNC";
-
Start multi master replication
To bootstrap a new group, you must explicitly state that with:
server1> SET GLOBAL group_replication_bootstrap_group= 1;
This flag shall be set on the first member of each group and should be set back to 0 once the member is online. This reset is necessary for the member to be able to leave and rejoin, if needed, the *same* group instead of starting a new one.
You should also configure the member contact info when using XCom.
SET GLOBAL group_replication_local_address="127.0.0.1:10301";
SET GLOBAL group_replication_peer_addresses= "127.0.0.1:10301,127.0.0.1:10302,127.0.0.1:10303";
The member can now be started.
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group= 0;
-
Check the member status
server1> SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
             CHANNEL_NAME: group_replication_applier
               GROUP_NAME: 8a94f357-aab4-11df-86ab-c80aa9429562
              SOURCE_UUID: 8a94f357-aab4-11df-86ab-c80aa9429562
                THREAD_ID: NULL
            SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
 RECEIVED_TRANSACTION_SET: 8a94f357-aab4-11df-86ab-c80aa9429562:1
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE:
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
Here you can see information about group replication through the status of its main channel. Besides the group name, you can see that the group replication applier is running and that no error was detected.
-
Check the group members
server1> SELECT * FROM performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: e221c36c-c652-11e4-956d-6067203feba0
 MEMBER_HOST: localhost.domain
 MEMBER_PORT: 13001
MEMBER_STATE: ONLINE
-
Test query execution
Start server 2:
server2> SET GLOBAL group_replication_group_name= "8a94f357-aab4-11df-86ab-c80aa9429562";
SET GLOBAL group_replication_local_address="127.0.0.1:10302";
SET GLOBAL group_replication_peer_addresses= "127.0.0.1:10301,127.0.0.1:10302,127.0.0.1:10303";
SET GLOBAL group_replication_recovery_user='rpl_user';
SET GLOBAL group_replication_recovery_password='rpl_pass';
START GROUP_REPLICATION;
server2> SELECT * FROM performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: c55e10ed-c654-11e4-957a-6067203feba0
 MEMBER_HOST: localhost.domain
 MEMBER_PORT: 13002
MEMBER_STATE: ONLINE
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: e221c36c-c652-11e4-956d-6067203feba0
 MEMBER_HOST: localhost.domain
 MEMBER_PORT: 13001
MEMBER_STATE: ONLINE
Insert some data on server 1:
server1> CREATE DATABASE test;
CREATE TABLE test.t1 (c1 INT NOT NULL PRIMARY KEY) ENGINE=InnoDB;
INSERT INTO test.t1 VALUES (1);
Alternate between servers and check the data flow:
server2> SELECT * FROM test.t1;
+----+
| c1 |
+----+
|  1 |
+----+
server2> INSERT INTO test.t1 VALUES (2);
server1> SELECT * FROM test.t1;
+----+
| c1 |
+----+
|  1 |
|  2 |
+----+
-
See distributed recovery in action
When you start a new server, it will try to get all the data it is missing from the other group members. It uses the configured access credentials to connect to another member and fetch the missing group transactions.
During this period its state is shown as 'RECOVERING', and you should not perform any action on this server during this phase.
server3> SET GLOBAL group_replication_group_name= "8a94f357-aab4-11df-86ab-c80aa9429562";
SET GLOBAL group_replication_local_address="127.0.0.1:10303";
SET GLOBAL group_replication_peer_addresses= "127.0.0.1:10301,127.0.0.1:10302,127.0.0.1:10303";
SET GLOBAL group_replication_recovery_user='rpl_user';
SET GLOBAL group_replication_recovery_password='rpl_pass';
START GROUP_REPLICATION;
SELECT * FROM performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 43d16968-c656-11e4-9583-6067203feba0
 MEMBER_HOST: localhost.domain
 MEMBER_PORT: 13003
MEMBER_STATE: RECOVERING
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: c55e10ed-c654-11e4-957a-6067203feba0
 MEMBER_HOST: localhost.domain
 MEMBER_PORT: 13002
MEMBER_STATE: ONLINE
*************************** 3. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: e221c36c-c652-11e4-956d-6067203feba0
 MEMBER_HOST: localhost.domain
 MEMBER_PORT: 13001
MEMBER_STATE: ONLINE
Wait for it to come online. Truth be told, with such a small amount of data this state is hard to spot here, but you should be aware of it when dealing with real data sets.
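If you are scripting this wait on a real data set, you could poll the member state until it leaves 'RECOVERING'. A minimal sketch, assuming the client path and port 13003 used in this tutorial:

```shell
# get_member_state wraps the performance_schema query for this member;
# with a live server it shells out to the mysql client used above.
get_member_state() {
  ./bin/mysql -uroot -h 127.0.0.1 -P 13003 -N -e \
    "SELECT MEMBER_STATE FROM performance_schema.replication_group_members \
     WHERE MEMBER_PORT = 13003;"
}

# wait_online blocks until the member reports ONLINE.
wait_online() {
  while [ "$(get_member_state)" != "ONLINE" ]; do
    sleep 1
  done
  echo "member is ONLINE"
}
```

This is only a sketch; in production you would also want a timeout and to handle the ERROR state.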
server3> SELECT * FROM performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 43d16968-c656-11e4-9583-6067203feba0
 MEMBER_HOST: localhost.domain
 MEMBER_PORT: 13003
MEMBER_STATE: ONLINE
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: c55e10ed-c654-11e4-957a-6067203feba0
 MEMBER_HOST: localhost.domain
 MEMBER_PORT: 13002
MEMBER_STATE: ONLINE
*************************** 3. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: e221c36c-c652-11e4-956d-6067203feba0
 MEMBER_HOST: localhost.domain
 MEMBER_PORT: 13001
MEMBER_STATE: ONLINE
Check that the data is there:
server3> SELECT * FROM test.t1;
+----+
| c1 |
+----+
|  1 |
|  2 |
+----+
If you want to read more on how this is achieved, please check my blog post about Distributed Recovery.
-
Be aware of failures in concurrency scenarios
Due to the distributed nature of MySQL groups, concurrent updates can result in query failures if the transactions are found to be conflicting. Let's perform a concurrent update to the same row in the example table.
On server 1
server1> UPDATE test.t1 SET c1=4 WHERE c1=1;
On server 2
server2> UPDATE test.t1 SET c1=3 WHERE c1=1;
Execute in parallel
server1> UPDATE test.t1 SET c1=4 WHERE c1=1;
Query OK, 1 row affected (0.06 sec)
server2> UPDATE test.t1 SET c1=3 WHERE c1=1;
ERROR 1181 (HY000): Got error 149 during ROLLBACK
Note that the scenario where the second update succeeds and the first one fails is equally possible; it depends only on the order in which the transactions were delivered and certified inside the plugin.
Let’s check the tables.
server1> SELECT * FROM test.t1;
+----+
| c1 |
+----+
|  2 |
|  4 |
+----+
server2> SELECT * FROM test.t1;
+----+
| c1 |
+----+
|  2 |
|  4 |
+----+
The failed query rolls back and no server is affected by it.
-
Check the execution stats
Check your GTID stats on each group member
server1> SELECT @@GLOBAL.GTID_EXECUTED;
+------------------------------------------+
| @@GLOBAL.GTID_EXECUTED                   |
+------------------------------------------+
| 8a94f357-aab4-11df-86ab-c80aa9429562:1-8 |
+------------------------------------------+
server2> SELECT @@GLOBAL.GTID_EXECUTED;
+------------------------------------------+
| @@GLOBAL.GTID_EXECUTED                   |
+------------------------------------------+
| 8a94f357-aab4-11df-86ab-c80aa9429562:1-8 |
+------------------------------------------+
Note that in all servers the GTID executed set is the same and belongs to the group.
You may be asking yourself why the set contains 8 transactions when we only executed 5 successful queries in this tutorial. The reason is that whenever a member joins or leaves the group, a transaction is logged to mark this moment on every member, for recovery reasons.
The member execution stats are also available on the performance schema tables.
server1> SELECT * FROM performance_schema.replication_group_member_stats\G
*************************** 1. row ***************************
                      CHANNEL_NAME: group_replication_applier
                           VIEW_ID: 1425918173:3
                         MEMBER_ID: e221c36c-c652-11e4-956d-6067203feba0
       COUNT_TRANSACTIONS_IN_QUEUE: 0
        COUNT_TRANSACTIONS_CHECKED: 6
          COUNT_CONFLICTS_DETECTED: 1
     COUNT_TRANSACTIONS_VALIDATING: 0
TRANSACTIONS_COMMITTED_ALL_MEMBERS: 8a94f357-aab4-11df-86ab-c80aa9429562:1-8
    LAST_CONFLICT_FREE_TRANSACTION: 8a94f357-aab4-11df-86ab-c80aa9429562:8
Here it can be seen that, of the 6 queries executed in this tutorial, 1 was found to be conflicting. We can also see that the number of transactions in the queue is 0, meaning no transactions are waiting for validation.
In the last fields, the number of transactions validating is 0 because all executed transactions are already considered stable on all members, as seen in the second to last field. In other words, every member knows all the data, so the number of possibly conflicting transactions is 0.
However if you execute this query on server 3, the result will be different.
server3> SELECT * FROM performance_schema.replication_group_member_stats\G
*************************** 1. row ***************************
                      CHANNEL_NAME: group_replication_applier
                           VIEW_ID: 1425918173:3
                         MEMBER_ID: 43d16968-c656-11e4-9583-6067203feba0
       COUNT_TRANSACTIONS_IN_QUEUE: 0
        COUNT_TRANSACTIONS_CHECKED: 2
          COUNT_CONFLICTS_DETECTED: 1
     COUNT_TRANSACTIONS_VALIDATING: 0
TRANSACTIONS_COMMITTED_ALL_MEMBERS:
    LAST_CONFLICT_FREE_TRANSACTION: 8a94f357-aab4-11df-86ab-c80aa9429562:8
Here we can see only two certified transactions, as the remaining ones were all transmitted during recovery, but you can still see that the certification data is there, as on the other members.
For more information on performance tables check our blog post on the subject.
-
Stop group replication
To stop the plugin, you just need to execute:
server1> STOP GROUP_REPLICATION;
On server shutdown, the plugin stops automatically.
-
Reset group replication channels
If, after using group replication, you want to remove the associated channel and files, you can execute:
server1> RESET SLAVE ALL FOR CHANNEL "group_replication_applier";
Note that the “group_replication_applier” channel is not a normal slave channel and will not respond to generic commands like “RESET SLAVE ALL”.
-
How to start multi master replication at server boot
To enable the automatic start of multi master replication at server boot, two options are always needed.
The group name
--loose-group_replication_group_name="8a94f357-aab4..."
The start on boot flag
--loose-group_replication_start_on_boot=1
Besides these two, and when not using the default parameters, you will also need to configure the access options that allow the members to connect to one another during recovery.
--loose-group_replication_recovery_user='rpl_user'
--loose-group_replication_recovery_password='rpl_pwd'
If using XCom, you also need the contact information for the other members.
--loose-group_replication_local_address="127.0.0.1:10301"
--loose-group_replication_peer_addresses="127.0.0.1:10301,127.0.0.1:10302,127.0.0.1:10303"
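All the boot-time pieces above can also be collected in the member's configuration file. A sketch for server 1 of this tutorial, reusing the tutorial's example group name, credentials, and addresses (adjust the local address per member):

```
[mysqld]
loose-group_replication_group_name="8a94f357-aab4-11df-86ab-c80aa9429562"
loose-group_replication_start_on_boot=1
loose-group_replication_recovery_user='rpl_user'
loose-group_replication_recovery_password='rpl_pass'
loose-group_replication_local_address="127.0.0.1:10301"
loose-group_replication_peer_addresses="127.0.0.1:10301,127.0.0.1:10302,127.0.0.1:10303"
```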
Try it now and send us your feedback
Still taking its first steps, MySQL Group Replication is in active development.
Feel free to try it and get back to us so we can make it even better for the community!
I get an exception when starting group replication (START GROUP_REPLICATION). It tries to read from the global_variables table. By granting SELECT on performance_schema.global_variables, I am able to start replication. I just want to get your confirmation on this.
Hi Ivan,
Thanks for the feedback!
About your issue, it sounds strange that you have permissions to start group replication but not to access the performance schema table.
What permissions does your user have?
It would be great to have 'group_replication.so' for Ubuntu 14.04, especially for testing with MySQL Sandbox. It is much easier to install 3 MySQL servers and then just move the plugin to the related folder and start, rather than compiling MySQL with the plugin and then installing MySQL from source using MySQL Sandbox.
I am getting grant error for replication user.
2015-10-08T07:21:03.353445Z 8 [ERROR] Slave I/O for channel 'group_replication_recovery': The slave I/O thread stops because a fatal error is encountered when it try to get the value of SERVER_ID variable from master. Error: SELECT command denied to user 'rpl_user'@'localhost' for table 'global_variables', Error_code: 1142
Hi Ivan, Shahriyar
Your complaints about lack of permissions should be related to
https://bugs.mysql.com/bug.php?id=77732
This is solved in 5.7.9 so it should not affect future releases.
Sorry for the inconvenience, for now please grant SELECT permissions for the replication user on performance_schema.global_variables.
I’m getting error for insatll plugin group replication
mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
ERROR 1126 (HY000): Can't open shared library '/usr/lib64/mysql/plugin/group_replication.so' (errno: 2 /usr/lib64/mysql/plugin/group_replication.so: undefined symbol: _Z26channel_is_applier_waitingPc)
I’m guessing that the plugin does not match the server version. I would recommend that you use the latest lab release, which is MySQL 5.7.14 based:
http://mysqlhighavailability.com/mysql-group-replication-a-quick-start-guide/
If you still have problems, please let us know.
Thanks!
Many thanks. I am indeed using the new version:
[root@wdb01 /home/kunz]$ rpm -qa | grep -i '^mysql'
mysql-community-server-5.7.14-1.labs_gr080.el6.x86_64
mysql-community-test-5.7.14-1.labs_gr080.el6.x86_64
mysql-community-common-5.7.14-1.labs_gr080.el6.x86_64
mysql-community-client-5.7.14-1.labs_gr080.el6.x86_64
mysql-community-devel-5.7.14-1.labs_gr080.el6.x86_64
mysql-community-embedded-devel-5.7.14-1.labs_gr080.el6.x86_64
mysql-community-libs-compat-5.7.14-1.labs_gr080.el6.x86_64
mysql-community-libs-5.7.14-1.labs_gr080.el6.x86_64
mysql-community-embedded-5.7.14-1.labs_gr080.el6.x86_64
OK. I’m assuming that the specific file you’re loading is provided by those RPMs? We can check the relevant versions with:
1. rpm -q --whatprovides /usr/lib64/mysql/plugin/group_replication.so
2. mysql> show global variables like "version%";
The shared library *should* have that symbol:
bash# nm -a /usr/lib64/mysql/plugin/group_replication.so | grep channel_is_applier_waiting
U _Z26channel_is_applier_waitingPKc
I didn't find anything new:
1. [root@wdb01 /home/kunz]$ rpm -q --whatprovides /usr/lib64/mysql/plugin/group_replication.so
mysql-community-server-5.7.14-1.labs_gr080.el6.x86_64
2. mysql> show global variables like 'version%';
+-------------------------+------------------------------+
| Variable_name           | Value                        |
+-------------------------+------------------------------+
| version                 | 5.7.14-labs-gr080-log        |
| version_comment         | MySQL Community Server (GPL) |
| version_compile_machine | x86_64                       |
| version_compile_os      | Linux                        |
+-------------------------+------------------------------+
4 rows in set (0,00 sec)
3. [root@wdb01 /home/kunz]$ nm -a /usr/lib64/mysql/plugin/group_replication.so | grep channel_is_applier_waiting
U _Z26channel_is_applier_waitingPc
hello
Can percona be used with MySQL Group Replication?
Percona Server 5.7 could be used with MySQL Group Replication. I would expect Percona to fully support it at some point.
I have been trying to get this plugin to work but I have run into a problem. I have my three servers, the replication plugin installed, and my bootstrap server started. Then I attempt to start the next server, and when I do START GROUP_REPLICATION it fails; the log states:
Getting peer name failed while connecting to server x.x.x.x with error 113 -No route to host.
Now, from the member server, I can ping all the other servers in the group and I can ssh from one server to the other with no problem. So why am I getting this error and, more importantly, what do I do to resolve it?
Many thanks
Hi Joseph,
This isn’t really an appropriate medium for general support. Please instead use the MySQL forums:
http://forums.mysql.com/list.php?177
Regarding this issue, error code 113 is not a MySQL error code. It's a Linux networking error code:
EHOSTUNREACH: “No route to host”
Based on what you’ve said, I’m guessing that there’s a firewall blocking access between the machines on/via port 6606. Again, please see my “Note: If iptables” note here as an example:
http://mysqlhighavailability.com/mysql-group-replication-a-quick-start-guide/
This is a networking issue and not a Group Replication issue. We can’t use this medium for all manner of support. The forums are more appropriate. Or if you have a support agreement then please open a ticket and we’ll help get to the bottom of it. It will take quite a bit of back and forth to track down the cause and then address it.
Best Regards,
Matt
Hi Pedro Gomes :
I have hit the error below. I can confirm that MySQL 5.7.17 doesn't have these variables; please help me, thanks!
server1> SET GLOBAL group_replication_recovery_user='rpl_user';
ERROR 1193 (HY000): Unknown system variable 'group_replication_recovery_user'
server1> SET GLOBAL group_replication_recovery_password='rpl_pass';
ERROR 1193 (HY000): Unknown system variable 'group_replication_recovery_password'
server1> select version();
+------------+
| version()  |
+------------+
| 5.7.17-log |
+------------+
Hi Kelly,
This how-to is for an old version of Group Replication that was released almost 2 years ago (time flies!).
There is now proper documentation; please see the getting started guide at
http://dev.mysql.com/doc/refman/5.7/en/group-replication-deploying-in-single-primary-mode.html
Thanks for your interest!
Hi,
The IO thread is not connecting; I am trying to set up group replication locally on 3 instances.
Slave I/O for channel 'group_replication_recovery': error connecting to master 'rpl_user@myhost:3306' - retry-time: 60 retries: 1, Error_code: 2005
My gut feeling is that somewhere I had to give my host as 127.0.0.1, but for this group_replication_recovery channel I'm not able to. Is there anything I'm missing?
Yeah, it could be that the server announces its host as being myhost and that host is not resolvable either locally or on the other members.
Usually you solve this by using the report-host option to make the servers announce a valid host.
https://dev.mysql.com/doc/refman/5.7/en/replication-options-slave.html#option_mysqld_report-host
I hope it helps
Excellent. Thanks Pedro.
Works like a charm