MySQL Group Replication – 0.4.0 Labs Release

Hi all, it is time again for another preview release of MySQL Group Replication, the plugin that brings multi-master update-everywhere capabilities to MySQL, as we described in the Hello World post.

We are very proud to announce the third preview release of MySQL Group Replication, which introduces exciting new features. Please enjoy the highlights!

Introduced changes

Plugin version access control

In an evolving product like Group Replication, it is important to ensure the correct functioning of the group by automatically checking that all servers have a compatible plugin version.

This feature governs access control for different plugin versions in a group. It dictates what action is taken when a server with an older or newer plugin version tries to join a group: by default, a server with a lower major or minor version is not allowed to join.
This mechanism is crucial for controlling upgrade procedures.

Please read the blog post Plugin version access control on MySQL Group Replication for the full details.

Free-Standing Plugin

We now release the plugin as its own module. This means that the plugin is no longer tied to the MySQL server release cycle; instead, it has its own release cycle!
The plugin is now built independently of the MySQL server code base and instead uses the well-defined interface of MySQL 5.7 to connect to the MySQL server.
This is a step toward shorter release cycles that will allow us to fulfill user requests sooner.

Every MySQL Group Replication version will be based on a MySQL server version, since the plugin does depend on server features/interfaces, but the plugin features can and will evolve over time.
In the blog post MySQL Group Replication plugin packages we explain the details.

Bug Fixes

Many bugs have been fixed in this third preview release of MySQL Group Replication, moving it quickly toward a more stable, reliable, and usable plugin.

Conclusion

Go to http://labs.mysql.com/ and try the new preview release of MySQL Group Replication following the instructions at Getting started with MySQL Group Replication and send us your feedback.

Note that this is not a GA release yet, so don’t use it in production, and expect bugs here and there. If you do experience bugs, we are happy to fix them; all you have to do is file a report in the bugs DB.

MySQL Group Replication: Plugin Version Access Control.

Here is version 0.4.0 of the MySQL Group Replication plugin, our solution that gives you virtually synchronous updates on any member of a MySQL server group. With this new version you can expect a bunch of bug fixes and new features.

One of the new features that marks this release is access control for different plugin versions in a server group. In an evolving product like Group Replication, it is important to ensure the correct functioning of the group by automatically checking that all servers have a compatible plugin version.

Continue reading MySQL Group Replication: Plugin Version Access Control.

MySQL Group Replication – 0.4.0 Labs Release Plugin Packages

The multi-master plugin for MySQL is here. MySQL Group Replication provides virtually synchronous updates on any member in a group of MySQL servers, with conflict handling, automatic group membership management, and failure detection.

You can read about all MySQL Group Replication features following the tag MySQL Group Replication.

In this blog post we present the packages that MySQL Group Replication 0.4.0 offers and the tasks you need to perform to install the plugin on the MySQL server.

Continue reading MySQL Group Replication – 0.4.0 Labs Release Plugin Packages

Replication Defaults in MySQL-5.7.7

As part of the release of MySQL-5.7.7, based on community feedback, improvements have been made with respect to replication defaults, which will make MySQL replication safer and easier to use.

This post describes these default changes and briefly explains their advantages.

1) Enable Simplified GTID recovery by default:

binlog_gtid_simple_recovery=TRUE by default.
https://dev.mysql.com/doc/refman/5.7/en/replication-options-gtids.html#sysvar_binlog_gtid_simple_recovery

This variable controls how binary log files are iterated during the search for GTIDs when MySQL starts or restarts.

Setting this option to TRUE improves recovery performance and makes server startup and binary log purge fast. When it is set, the server opens only the oldest and the newest binary log files, and the values of gtid_purged and gtid_executed are computed based only on the Previous_gtids_log_event or Gtid_log_event found in these files. This ensures that only two binary log files are opened during server restart or when binary logs are being purged.

Setting this option to TRUE in MySQL-5.6 gives good performance, but in some corner cases the computation of the set of GTIDs in the binary log might be inaccurate. Setting it to FALSE always computes correct results.

In MySQL-5.7.7 there is almost no trade-off between speed and safety. Setting this option to TRUE always computes the correct results except in some corner cases.

Corner cases include:

  • The newest binary log was generated by MySQL 5.7.5 or older, and gtid_mode was ON for some binary logs but OFF for the newest binary log.
  • A SET GTID_PURGED statement was issued on a MySQL version prior to 5.7.7, and the binary log that was active at the time of the SET GTID_PURGED has not yet been purged.

Hence the faster option is almost always better and it has been made the default.

If this option is off, then during recovery, to initialize gtid_executed, all binary logs are examined starting with the newest file, and to initialize gtid_purged, all binary logs are examined starting from the oldest to the newest. This can potentially take a long time.
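If you hit one of the corner cases above, you can revert to the exhaustive scan in the server option file. A minimal sketch (the variable is not dynamic, so it must be set at startup):

[mysqld]
# Revert to the pre-5.7.7 behavior of scanning all binary log files
# (slower startup and purge, but correct even in the corner cases above)
binlog_gtid_simple_recovery = OFF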

2) Set ROW based binary log format by default:

binlog-format=ROW by default.
http://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_binlog_format

When operating with the binary log ON and the binary log format set to ROW, changes to individual table rows are written to the binary log file and then applied on the slave side. This is different from STATEMENT format, which instead makes the server write the statements to the binary log, to be re-executed on the slave.

All sorts of changes can be replicated, and this is the safest form of replication. It also requires fewer row locks than statement-based replication for certain SQL statements. With the previous default of STATEMENT, nondeterministic statements could make the slave diverge from the master.
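To illustrate why (the table name here is hypothetical), consider a statement whose affected rows are not fully determined by its text:

-- Unsafe under STATEMENT format: without an ORDER BY, the master and the
-- slave may each pick a different set of 10 rows, silently diverging.
DELETE FROM sessions LIMIT 10;
-- Under ROW format the master logs the actual rows it deleted, so the
-- slave applies exactly the same changes.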

3) Make binlog_error_action=ABORT_SERVER the default:

‘binlog_error_action’ controls what action should be taken when the server cannot write to the binary log.
http://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_binlog_error_action

Setting binlog_error_action=ABORT_SERVER makes the server abort on fatal errors such as a full disk or a read-only file system. With ABORT_SERVER, the binary log and the slaves are safe, and this is the reason we have changed the default.

With the previous default, binlog_error_action=IGNORE_ERROR, if an error occurs that prevents mysqld from writing to the binary log, the server writes an appropriate error message to the error log and disables binary logging. Note that the server continues to operate with the binary log disabled, causing slaves to miss the changes that happen after the error.
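If you nevertheless prefer the old availability-over-safety behavior, a minimal option-file sketch:

[mysqld]
# Old default: log the error, disable the binary log, and keep running
# (slaves silently miss every change made after the error)
binlog_error_action = IGNORE_ERROR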

4) Enable crash safe binary logs by default:

Crash safe binary logs are controlled by the sync_binlog variable. This variable identifies the number of binary log commit groups to collect before synchronizing the binary log to disk.
http://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_sync_binlog

With sync_binlog=1, all transactions are synchronized to the binary log before they are committed. Therefore, even in the event of an unexpected restart, any transactions missing from the binary log are still only in the prepared state, which causes the server’s automatic recovery routine to roll them back. This guarantees that no transaction is lost from the binary log, and it is the safest option. It does increase the total number of fsync calls, but since MySQL 5.6 the server groups transactions and fsyncs them together, which greatly minimizes the potential performance hit.

When sync_binlog=0, the binary log is never synchronized to disk by the MySQL server; the server relies on the operating system to flush the binary log’s contents from time to time, as for any other file. In the event of a power failure or operating system crash, it is therefore possible that the server has committed transactions that were never synchronized to the binary log. The recovery routine cannot recover these transactions, and they are lost from the binary log.

Hence the new default sync_binlog=1 is safer.
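A minimal option-file sketch of the trade-off:

[mysqld]
# New default: sync the binary log to disk at every transaction commit;
# group commit batches the fsyncs, so the cost is modest
sync_binlog = 1
# A value of N syncs once per N commit groups: faster, but after a crash
# the last few transactions may be missing from the binary log
# sync_binlog = 1000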

5) Lower default slave_net_timeout:

‘slave_net_timeout’ specifies the number of seconds to wait for more data from the master before the slave considers the connection broken, aborts the read, and tries to reconnect. This variable also affects the frequency of heartbeats between master and slave, since the heartbeat period defaults to slave_net_timeout divided by 2.
http://dev.mysql.com/doc/refman/5.7/en/replication-options-slave.html#option_mysqld_slave-net-timeout

The new default value is slave_net_timeout=60, while the previous default was 3600 seconds. With the old value, transient network failures could cause long replication delays before the slave noticed the broken connection and reconnected.
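A minimal option-file sketch (the CHANGE MASTER TO clause is the standard way to set the heartbeat period explicitly on a slave):

[mysqld]
# New default: the slave declares the connection broken after 60 seconds
# without data, instead of the old 3600
slave_net_timeout = 60
# The heartbeat period then defaults to 30 seconds; to set it explicitly,
# run on the slave: CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = 30;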

6) Deprecate @@session.gtid_executed:

The ‘@@global.gtid_executed’ variable contains a representation of the set of all transactions that are logged in the binary log.
http://dev.mysql.com/doc/refman/5.7/en/replication-options-gtids.html#sysvar_gtid_executed

When used in session scope, this variable contains the set of transactions that are written to the transaction cache. More specifically, SESSION.GTID_EXECUTED is equal to UUID:NUMBER only when GTID_NEXT has the form UUID:NUMBER and at least one DML statement has been executed but not committed. When GTID_NEXT is set to other values, SESSION.GTID_EXECUTED contains an empty string. In MySQL-5.7.7, using this variable in session scope generates a deprecation warning. There is no change with respect to @@global.gtid_executed.

The global variable is the one used most frequently, and if a user forgets the ‘global’ keyword they get the session variable instead, which gives them unexpected data. To avoid this confusion, and because the session variable is not known to be useful for any practical purpose, it is being deprecated.
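A quick sketch of the pitfall this avoids:

-- The global variable is almost always the one you want:
SELECT @@global.gtid_executed;
-- Without an explicit scope, @@gtid_executed resolves to the session
-- variable, which is usually an empty string and is deprecated in 5.7.7:
SELECT @@gtid_executed;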

The JavaScript Connector for MySQL you never heard of

It’s easier now than ever before to stand up a web server and connect it to a database using node.js and Express. Using node.js to get data out of a relational database used to require that users become well versed in SQL technology.

So the MySQL dev team started on a project to make it trivial (well, easy) to store and retrieve documents from MySQL and deliver them to node.js applications, without using SQL. The result is called mysql-js and I’d like to introduce it to you. You can download it today at GitHub.

The project started by looking at storing JSON documents that were relatively simple: each property of the document would be stored in a column in the database. This model allows storing simple documents in a way that could take full advantage of the relational data model. But storing complex properties was non-trivial.

So we developed a technique by which complex properties (e.g. arrays, nested JavaScript objects) would be serialized into JSON before being stored. When retrieving these properties from the database, the serialization process would be reversed and the original property would be restored.

With this change, we could store homogeneous documents that had a well-defined structure. But tables that stored documents with extra properties were still problematic. So we introduced a column that would store all properties that didn’t have their own column in the database.
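As a sketch of the resulting model (the column and property names here are hypothetical):

// A document stored in a table with columns user_name and full_name,
// plus a catch-all column for properties without a column of their own:
var doc = {
  user_name: 'clr',            // stored in its own column
  full_name: 'Craig Russell',  // stored in its own column
  tags: ['mysql', 'node'],     // complex property, serialized to JSON
  twitter: '@craig'            // no dedicated column; stored in the
                               // extra-properties column
};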

With this data model in place, we concentrated on the data access APIs. We started with the basics: insert and delete. Since tables have a primary key column, which has a corresponding property in the document, storing a document is as simple as identifying which table it belongs to and writing it out.

Deleting documents is simple enough. Just identify the document you want to delete by its primary key or a unique key.

Finding documents once you store them requires that you identify the document by its primary or unique key. Queries on arbitrary properties that might return multiple documents are obviously more complex, so we treated that as a separate issue.

Updating documents requires two things: identifying which document to update by specifying its primary key or a unique key, and specifying which properties to change.

The main user interface to the connector is the Session. In the MySQL architecture, a session is a server artifact that is responsible for accessing data on behalf of a user, receiving requests and delivering results. In the JavaScript connector, we use a local session to which you make requests, which the connector then forwards to the server session.

Code examples

Getting a session is straightforward. If you know the connection properties for your MySQL server, you can construct a regular JavaScript object and then ask the session service for a session to use. If your server uses the standard host and port, you can use the defaults that the connector already knows about. If you want to add credentials, add them to the connection properties:

var mySQLClient = require('mysql-js');
var properties = new mySQLClient.ConnectionProperties('mysql');
properties.user = 'user1';
properties.password = 'sekrit';
mySQLClient.openSession(properties, onOpenSession);

The session is delivered to you in a callback. Callbacks follow the node.js convention, which has at least two parameters: err and data. In this case, data is the session object that you use to interact with the database. The connector also supports Promises/A+ (more about that later).
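For completeness, a minimal sketch of the onOpenSession callback used above (the error handling shown is ours):

function onOpenSession(err, session) {
  if (err) {
    console.log('Error opening session:', err);
    process.exit(1);
  }
  // the session is now ready for CRUD operations and queries
}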

CRUD Operations

Session supports CRUD operations, queries, and some utility functions. CRUD operations deal with one row in the database and one object in JavaScript. You can create objects via either JSON literal notation (created directly via the ‘{name: value}’ syntax) or a constructor, allowing you to take advantage of object-oriented programming.

For the following examples, assume a table exists in the default database.

CREATE TABLE author (
  user_name varchar(20) NOT NULL PRIMARY KEY,
  full_name varchar(250),
  posts int unsigned not null default 0
) ENGINE=ndbcluster;

Insert

To insert a row in the author table, use the insert function.

var craig = {'user_name': 'clr', 'full_name': 'Craig Russell'};
session.insert('author', craig, onInsert);

Since the posts column has a default, it does not need to be included in the insert data.

Insert or Update

To insert a row in the author table, or update the row if it already exists, use the save function. This corresponds to the SQL clause ON DUPLICATE KEY UPDATE. [Some APIs use the term “write”, but this doesn’t have much semantic content. Other APIs call this “upsert”, but we found that term disturbing. We considered “indate”, but that didn’t seem to help.]

var craig = {'user_name': 'clr', 'full_name': 'Craig Russell', posts:100};
session.save('author', craig, onSave);

Find

To find a single row in the database, use the find function. The key is a primitive that is the full primary key, an object whose properties include the primary key, or an object whose properties include a unique key.

function onFound(err, row) {
  // error handling omitted for clarity
  // prints {'user_name': 'clr', 'full_name': 'Craig Russell', 'posts': 100}
  console.log(row);
}
// find(tableName, key, callback)
session.find('author', 'clr', onFound);

Update

To update a single row in the database, use the update function. The key is used only to uniquely identify the row; the value is used only to update the properties specified.

// update(tableName, key, value, callback)
session.update('author', 'clr', {'full_name': 'Craig L Russell'}, onUpdate);

Delete

To delete a single row in the database, use the delete function. [We aliased the delete function with the remove function, in case you don’t like your IDE telling you about your use of the delete keyword in an unexpected context.]

// delete(tableName, key, callback)
session.delete('author', 'clr', onDelete);

Using Constructors

You might want to use constructors to better organize your application. We support the Domain Model pattern elaborated in Martin Fowler’s excellent reference, Patterns of Enterprise Application Architecture. In this case, define the constructor as you wish and use it (or instances of it) in the session operations.

function Author(name, full_name) {
  if (name) this.user_name = name;
  if (full_name) this.full_name = full_name;
}
Author.prototype.getNumberOfPosts = function() {
  return this.posts;
};
Author.prototype.toString = function() {
  return ((this.posts > 100) ? 'Esteemed ' : '') +
      'Author: ' + this.user_name +
      ' Full Name: ' + this.full_name +
      ' posts: ' + this.posts;
};

There’s just one thing extra to do when using constructors. Currently, the constructor is annotated with the name of the table to use to store the data. We’re working on allowing the table name to default to the constructor function name.

new mySQLClient.TableMapping('author').applyToClass(Author);

Insert

To insert a row in the author table, use the insert function. Since you are using a constructor, there is no need to name the table. The connector will use the table in the TableMapping.

var craig = new Author('clr', 'Craig Russell');
session.insert(craig, onInsert);

Since the posts column has a default, it does not need to be included in the insert data.

Insert or Update

To insert a row in the author table, or update the row if it already exists, use the save function.

var craig = new Author('clr', 'Craig Russell');
craig.posts = 100;
session.save(craig, onSave);

Find

To find a single row in the database, use the find function.

function onFound(err, row) {
  // prints Author: clr Full Name: Craig Russell posts: 0
  console.log(row);
}
session.find(Author, 'clr', onFound);

Update

To update a single row in the database, use the update function.

var key = new Author('clr');
var changes = new Author('', 'Craig L. Russell');
session.update(Author, key, changes, onUpdate);

Delete

To delete a single row in the database, use the delete (or remove) function.

session.delete(Author, 'clr', onDelete);

Promises

When using callbacks in JavaScript (in other words, when using node.js in the normal way), error handling code can obscure your application code. Promises are a way of writing much cleaner code because error handling is abstracted away.

For example, code with error handling might look like this:

// find an object
function onSession(err, s) {
  session = s;
  if (err) {
    console.log('Error onSession:', err);
    process.exit(0);
  } else {
    session.find('Author', 'clr', onFindByTableName);
  }
};

There is only one line of actual code but it’s lost in the error handling.

Instead, using promises, you can write code like this. Each function passed to a then call will be executed in sequence.

var mySQLClient = require('mysql-js');
var dbProperties = new mySQLClient.ConnectionProperties('mysql');
var session;

function exit(code) {
  process.exit(code);
}
function setSession(s) {
  session = s;
}
function insertByConstructor() {
  return session.insert(Author, new Author('sam', 'Sammy Snead'));
}
function insertByTableName() {
  return session.insert('author', 
    {user_name: 'char', full_name: 'Charlene LeMain'});
}
function saveByConstructor() {
  return session.save(Author, new Author('sam', 'Sammy Snead'));
}
function saveByTableName() {
  return session.save('author', 
    {user_name: 'char', full_name: 'Charlene LeMain'});
}
function findByConstructor() {
  return session.find(Author, 'sam');
}
function findByTableName() {
  return session.find('author', 'char');
}
function updateByConstructor() {
  return session.update(Author, 'sam', 
    new Author(null, 'Samuel K Snead'));
}
function updateByTableName() {
  return session.update('author', 'char', 
    {full_name: 'Charlene K LeMain'});
}
function deleteByConstructor(sam) {
  return session.delete(Author, 'sam');
}
function deleteByTableName(char) {
  return session.delete('author', 'char');
}
function closeSession() {
  return session.close();
}
function reportSuccess() {
  session.sessionFactory.close();
  console.log('All done.');
  exit(0);
}
function reportError(e) {
  console.log('error:', e);
  exit(1);
}

mySQLClient.openSession(dbProperties, null)
.then(setSession)
.then(insertByTableName)
.then(insertByConstructor)
.then(saveByTableName)
.then(saveByConstructor)
.then(updateByTableName)
.then(updateByConstructor)
.then(findByTableName)
.then(deleteByTableName)
.then(findByConstructor)
.then(deleteByConstructor)
.then(closeSession)
.then(reportSuccess, reportError);

Promises/A+ requires that the implementation (the mysql-js connector) define a then method that takes two arguments: a function called after the asynchronous operation succeeds, which receives a single value, and a function called only if the asynchronous operation fails, which receives the exception. The application can use the then function to organize asynchronous functions in many ways, including error handling.
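As a small sketch of how values flow through a chain (the handlers here are ours; as in the larger example above, operations return promises when the callback argument is omitted):

session.find('author', 'clr')
  .then(function(row) {
    // receives the value the previous step resolved with
    console.log('found:', row);
    return session.close();    // returning a promise chains it
  })
  .then(null, function(err) {
    // called only if a step above failed
    console.log('error:', err);
  });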

There are many more topics to discuss, including the use of transactions for guaranteed ACID properties, multiple back-end storage, complex queries, complex document storage and retrieval, and joining document-oriented tables with normalized relational tables. Stay tuned.

Fork it at GitHub

https://github.com/mysql/mysql-js

MySQL Binlog Events Use case and Examples

This is a continuation of the previous post; if you have not read that one, I would advise you to go through it before continuing.
I discussed three use cases in my last post; here I will explain them in detail.

  1.  Extracting data from a binary log file:
    The binary log file is full of information, but what if you want only selected information, for example:

    • printing the timestamp for every event in the binary log file.
    • extracting the event types of all the events occurring in the binary log file.

    And any other such data from an event in real time. Below is some sample code that shows how to do this step by step.

    • Connect to the available transport
      // the base class for both types of transport; the terms driver and
      // transport are used interchangeably in the code.
      binary_log::system::Binary_log_driver *drv;

      • For tcp transport
        // An example of the uri can be mysql://neha@localhost:3306
        drv= binary_log::system::create_transport("user@host:port_no");
      • For file transport
        // The path should be absolute, like
        // "/home/neha/binlog/masterbin.000001"
        drv=binary_log::system::create_transport("path of the binary log file");
      • Common code for both transports
        binary_log::Binary_log binlog(drv);
        // This internally calls methods from libmysqlclient library
        binlog.connect();
    • Fetch the stream in a buffer
      // In this pair object the first value will store the stream buffer and
      // the second value will store the buffer length; only one event is
      // received at a time.
      std::pair<unsigned char *, size_t> buffer_buflen;
      // This internally calls methods from the libmysqlclient library.
      drv->get_next_event(&buffer_buflen);
    • Decode the buffer and get an event object
      binary_log::Binary_log_event *event; // To store the event object
      // This class contains methods to decode all the binary log events
      Decoder decode;

      event= decode.decode_event((char *) buffer_buflen.first, buffer_buflen.second,
                                 NULL /* char array to store the error message */, 1);
    • Now that the event object is created, it can be used to extract all the information about that particular event, for example:
      std::cout << "Event Data" << std::endl;
      std::cout << "Event type " << event->get_event_type() << std::endl;
      std::cout << "Data written " << event->header()->data_written << std::endl;
      std::cout << "Binlog Position " << event->header()->log_pos << std::endl;

    You can see the complete code below.

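    As a sketch, the fragments above assemble into a small program like the following. The header name, the return convention of get_next_event, and the buffer types are assumptions based on the fragments; error handling is simplified.

    #include <iostream>
    #include <utility>
    #include "binlog.h"  // header from the MySQL Binlog Events labs package

    int main() {
      // File transport; use create_transport("user@host:port_no") for TCP.
      binary_log::system::Binary_log_driver *drv=
          binary_log::system::create_transport("/home/neha/binlog/masterbin.000001");
      binary_log::Binary_log binlog(drv);
      binlog.connect();

      // Fetch and decode one event at a time until no events are left.
      std::pair<unsigned char *, size_t> buffer_buflen;
      while (drv->get_next_event(&buffer_buflen) == 0) {
        Decoder decode;
        binary_log::Binary_log_event *event=
            decode.decode_event((char *) buffer_buflen.first,
                                buffer_buflen.second, NULL, 1);
        std::cout << "Event type " << event->get_event_type() << std::endl;
        std::cout << "Data written " << event->header()->data_written << std::endl;
        std::cout << "Binlog Position " << event->header()->log_pos << std::endl;
      }
      return 0;
    }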

    The step-by-step fragments above decode just one event; to keep reading events, place the code for decoding and printing in a while loop (as in the sketch above), with a breaking condition as per your requirements. If there are no more events left to read, an EOF message will appear in the case of the file transport.

  2. Filtering events from a Binary log file:
    Imagine a case where you want all the queries executed on your server in printable format in one place, maybe in some file, or you want to be notified whenever a binary log file is rotated (this can be identified by the Rotate event). There can be many other such use cases where you are only interested in some events and don’t want to load yourself with extra information. In the last use case we saw how to get an event object; to filter events we can do the following. The steps for getting the event object are the same as in the last example.

    /*
      Filtering Query_event and printing the query for each Query_event.
      The type of the event object is Binary_log_event, which is the abstract
      class for all events; to extract any data specific to one event type we
      need the appropriate cast.
    */
    if (event->get_event_type() == QUERY_EVENT)
      std::cout << " The Query is " << static_cast<Query_event*>(event)->query << std::endl;

    And as this is real time, you can just run your application and all the queries will be printed as soon as they are executed on the MySQL Server, in the case of a TCP transport.
  3. Building applications to support heterogeneous replication in real time:
    Heterogeneous replication is where the two data stores participating in the replication process are different. There are multiple scenarios where this is beneficial. For example, Apache Kafka captures data from different streams and processes it; imagine a scenario where we want to establish a replication system between MySQL Server and Apache Kafka. Using MySQL Binlog Events we can create an application that acts as a pipeline between MySQL Server and Apache Kafka and replicates the data from MySQL to Apache Kafka in real time.

These are just some of the many places where MySQL Binlog Events can be used. It is a very generic framework, so there will be many other use cases. Let us know how you are using this library; it’s always exciting to hear about real use cases. If you have any comments or questions, please get in touch.
If you happen to find a bug, please file it at bugs.mysql.com.
This library is available at labs.mysql.com; choose MySQL Binlog Events 1.0.0 from the drop-down menu.

Looking forward to the feedback.

MySQL Binlog Events – reading and handling information from your Binary Log

MySQL replication is among the top features of MySQL. In replication, data is replicated from one MySQL server (also known as the master) to another MySQL server (also known as the slave). MySQL Binlog Events is a set of libraries that work on top of replication and open the door to a myriad of use cases, like extracting data from binary log files, building applications to support heterogeneous replication, filtering events from binary log files, and much more.
All this in REAL TIME.
INTRODUCTION
I have already defined what MySQL Binlog Events is. To deliver on any of the above use cases, you first need to read the events from the binary log. This can be done using two types of transports:

1) TCP transport: Here your application will connect to an online MySQL Server and receive events as and when they occur.

2) File transport: As the name suggests, the application will connect to an already existing file and read the events from it.

In the MySQL framework we always receive one event at a time, irrespective of the transport. After the connection is established, we start reading the events from the binary log files and hold the information in a buffer.

To do something useful with an event it needs to be decoded, and then the relevant information is stored or processed based on the requirements of the application.
Event processing can be implemented in a very sophisticated manner using the content handlers in MySQL Binlog Events, which are designed specifically for that purpose.

The set of libraries in MySQL Binlog Events are all C++ libraries. Here is a brief introduction to them:

– libmysqlstream -> Methods related to connecting to the transport, receiving the buffer from the connection, and processing the events using content handlers.
– libbinlogevents -> Methods for decoding the events after receiving them from whichever transport the application is connected via.

GETTING AND COMPILING THE SOURCE CODE
You can get a copy of the source code from labs.mysql.com; select MySQL Binlog Events 1.0.0 from the drop-down menu. After selecting it you will get an option to download either the source code or the binaries for the respective platforms.

If you are downloading the source code, these are the prerequisites you need on your machine in order to compile it successfully:

  • MySQL binaries of 5.7.6 or greater
    • To link to the libmysqlclient library.
    • The header files in the include directory are also used.
  • MySQL source code of the same version as the binaries
    • To get a few header files that are not part of the binaries but are used
  • CMake 2.6 or greater
  • GCC 4.6.3 or greater

Steps for compiling MySQL Binlog Events source:

  1. Run the ‘cmake’ command on the parent directory of the MySQL Binlog Events source. This will generate the necessary Makefiles. Options to set while running CMake:
    1. Make sure to set the CMake option ENABLE_DOWNLOADS=1, which will install Google Test, required to run the unit tests.
    2. The option MYSQLCLIENT_STATIC_LINKING:BOOL=TRUE will make the linking with libmysqlclient static.
    3. You need to point to the MySQL binary directory in order to make the libmysqlclient library and other header files visible; this is done by setting MYSQL_DIR={parent directory of the MySQL binaries}.
    4. As already discussed, we also need to include header files from the MySQL source code; for that, set MYSQL_SOURCE_INCLUDE_DIR={parent directory of the MySQL source}/include.

    The full CMake invocation is:

    cmake . -DMYSQLCLIENT_STATIC_LINKING:BOOL=TRUE
            -DMYSQL_DIR={parent directory of the MySQL binaries}
            -DMYSQL_SOURCE_INCLUDE_DIR={parent directory of the MySQL source}/include
            -DENABLE_DOWNLOADS=1

  2. Run make and make install to build and install; this will create the libraries libmysqlstream and libbinlogevents.

That will be all; you are now ready to create your own application using MySQL Binlog Events.

For more details about the use cases and how to create a small application using MySQL Binlog Events, refer to the second blog in this series.

MySQL Cluster 7.4 New Features Webinar Replay

I recently hosted a webinar introducing MySQL Cluster and then looking into what’s new in the latest version (MySQL Cluster 7.4) in some more detail. The replay of the MySQL Cluster 7.4 webinar is now available here. Alternatively, if you just want to skim through the charts, then scroll down.

Abstract

MySQL Cluster powers the subscriber databases of major communication services providers as well as next generation web, cloud, social and mobile applications. It is designed to deliver:

  • Real-time, in-memory performance for both OLTP and analytics workloads
  • Linear scale-out for both reads and writes
  • 99.999% High Availability
  • Transparent, cross-shard transactions and joins
  • Update-Anywhere Geographic replication
  • SQL or native NoSQL APIs
  • All that while still providing full ACID transactions.

Understand some of the highlights of MySQL Cluster 7.4:

  • 200 Million queries per minute
  • Active-Active geographic replication with conflict detection and resolution
  • 5x faster on-line maintenance activities
  • Enhanced reporting for memory and database operations

Charts

Questions and Answers

  • Can all data be stored on disk rather than in memory? Any column that isn’t part of an index can be stored on disk if you want it to be. There is then an in-memory cache for the disk-based data.
  • Is in-memory data safe from the whole system shutting down (e.g. power loss for the whole data center)? The in-memory data is (asynchronously) checkpointed to disk (so that there is persistence but latency isn’t impacted).
  • I need to store more than 14K of non-BLOB/TEXT data in a single row – has this been addressed? As you say, the options are to use TEXT/BLOB columns (or of course to split the data over multiple rows).
  • Can you comment on improvements for virtualized deployments in the 7.4 version? Only to say that more and more people are deploying on VMs and we’re not seeing issues caused – if we do, then they’ll be fixed.
  • Can I upgrade from the previous version (7.3) to MySQL Cluster 7.4 or do I have to reinstall the product for the new version (7.4)? You can perform a rolling upgrade from MySQL Cluster 7.3 to MySQL Cluster 7.4 – the database stays up throughout the process and you don’t lose any data or have to stop writing changes.
  • Can I upgrage from the previous version (7.3) to MySQL Cluster 7.4 or do I have to reinstall the product of the new version (7.4)? You can perform a rolling upgarade from MySQL Cluster 7.3 to MySQL Cluster 7.4 – the database stays uo throughout the process and you don’t lose any data or have to stop writing changes.