Probably the best European conference on MySQL is coming this fall

The full schedule for Percona Live London 2014 was revealed earlier this month.
Congratulations to all selected speakers and a big thanks to those who submitted a talk.

I have many reasons to believe that this year’s conference will be a very good one.
Let me explain why:

A community event made by community members

If you didn’t know who selected the talks for the conference, have a look at the conference committee page.
This committee is a perfect mix of community members and evangelists.
As chairman of this conference, I can say that these guys did an amazing job rating and commenting on the huge number of submitted talks.

I particularly want to point out that despite the commitment of some committee members to their companies, they played the game with perfect fairness. They made smart choices with honesty and impartiality.

It should also be highlighted that while Percona organizes the conference and selects the committee members, it isn’t involved in any way in the choices made by the committee.
That means the committee really does work independently, through a fair process.

I established a new process this year to reinforce this idea of a fair selection of talks.
The committee members didn’t have access to the other members’ votes until the end of the rating period.
I found this method very interesting for forcing a spontaneous and personal choice from each committee member.
It was probably not perfect, but if you have a look at the selected talks, you may find the selection particularly fresh this year.

If I have the opportunity to work on this event again next year, we should probably involve the whole community in this process.
I don’t know how we could make this possible, but we have a year to think about it. Any ideas?

One more thing: We did all this for fun.

Active sponsors

I don’t want to promote the sponsors here, but remember that this conference wouldn’t be possible without them.
I just want to point out that the sponsors are actively involved in the MySQL community and its development.

You’ve probably heard how GitHub works with MySQL, or about all the good news Tokutek brings to MySQL and MongoDB.
Codership, Continuent, Devart and Severalnines are really active, as usual.
Pythian (including Blackbird) is a leader in the world of (big/small) data, don’t miss their blog.
And HGST works to improve integration between databases and SSDs.

Good sponsors make good events!

A well-balanced schedule

Of course, good topics make good events.

The predefined categories allowed for a good sample of topics this year.
HA, security, performance, replication… and a lot of case studies: you have no excuse not to attend.

The tutorials and the talks offered this year are simply amazing.

Also, Facebook is back this year with good stuff; it could be interesting to learn how they manage their small data.
Booking.com, eBay, HP and Spil Games certainly have very interesting things to present too.

And of course, all the latest news about Oracle and MariaDB.

You still have doubts? Book now and save money!

Hope to see you there next month.
Good luck and have fun!

PS: Don’t miss the (free) community dinner, because smart attendees make good events!

How to shrink the ibdata file by transporting tables with Trite

You’ve probably had some trouble with the shared InnoDB tablespace stored in the ibdata file, especially when it has grown for some reason and reached a critical size.

This can happen in some cases due to excessive rollback segment growth, or during a migration from a single shared tablespace to a file-per-table configuration, for example.

In this post, I would like to explain how to shrink the ibdata file after an unwanted file growth in a file-per-table configuration.
Note that the process could be done without Trite, but the tool saves you from writing the table transport scripts yourself.

Initial situation

Here is a sample of the InnoDB configuration:

innodb_data_file_path = ibdata1:100M:autoextend
innodb_file_per_table

And the status of your datafiles in the datadir directory:

drwx------ 2 mysql mysql 4.0K Dec 20 2012 performance_schema
drwxrwx--- 2 mysql mysql 4.0K Dec 20 2012 mysql
drwxrwx--- 2 mysql mysql 4.0K Jun 10 23:58 DB_INNO_FPT_1
drwxrwx--- 2 mysql mysql 4.0K Sep 8 15:53 DB_INNO_FPT_2
-rwxrwx--- 1 mysql mysql 317G Sep 8 23:37 ibdata1

The ibdata file size is 317GB and you want to recover this lost space.

What is Trite?

Trite is a client/server tool that provides customizable transport of binary InnoDB table files.
The tool was developed by Joshua Prunier and is available on GitHub.

The tool lets you connect a client to your database backup (XtraBackup) and stream the files to another database.
Again, you can do this without Trite, but the tool automates the manual process.

Also, this procedure is much quicker than a traditional mysqldump restore when tables become very large.

Finally, Trite can be useful in many use cases; it deserves your attention.

Why Trite?

Because in this case the dataset is too big for a traditional export/import with mysqldump.
We have to copy very large InnoDB tables, and Trite can help to do that with ease.

Also, we don’t simply want to restore a backup, we want to end up with an ibdata file of a reasonable size.
So, we have to copy each table’s files from the backup to a new, clean MySQL instance.

Prerequisites

You need a spare server to restore the backed-up files.
The source server should be master-ready (binlog, server_id…).
Note that you can use a single server, but the load generated by the process could be a problem on a production server.
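
For reference, here is a minimal sketch of what “master-ready” means for the source server configuration (the log file name and server-id are just examples):

[mysqld]
log-bin   = mysql-bin
server-id = 1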

Trite is written in Go, so you need Go and a Go MySQL driver to install Trite on your server.
The Trite installation process is clearly detailed on the GitHub page of the project.

All tables must be InnoDB with the innodb_file_per_table configuration activated.
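
A quick way to check for non-InnoDB tables (a simple query sketch, excluding the system schemas):

SELECT table_schema, table_name, engine
FROM information_schema.tables
WHERE engine <> 'InnoDB'
AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');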

XtraBackup is needed to perform the physical database backup.
Note that the target MySQL instance must be Percona Server (5.1 to 5.6), Oracle MySQL 5.6 or MariaDB (5.5 to 10).

Overview of the procedure

We’ll copy the InnoDB tables from the source server (aka S), which holds the oversized ibdata file, to the target server (aka T).
I assume that MySQL is installed and configured on both servers.

Here is an overview of the procedure:

  • Install Trite on both servers (not detailed here)
  • [S] Make a backup of the source database with XtraBackup
  • [S] Apply logs with the --export option
  • [S] Make a dump of your database objects with Trite
  • [S] Start a Trite server pointing at the previous backup and dump directories
  • [T] Use the Trite client to restore all the tables in a brand new MySQL instance
  • [T] Configure the spare server as a slave and wait for the replication lag (not detailed here)
  • Switch your application from [S] to [T] or copy the fresh database on the source server (not detailed here)

Detailed procedure

Backup the source database with XtraBackup

On the source server, you have to make a full physical backup of your databases.
The backup process is not detailed here; please refer to the documentation for details on installation and usage.

Output example:

# innobackupex --user=root --password=pass /backup
... output truncated ...
innobackupex: Backup created in directory '/backup/2014-01-25_14-22-06'
140125 14:30:29 innobackupex: Connection to database server closed
140125 14:30:29 innobackupex: completed OK!

Backup files are generated in the /backup/2014-01-25_14-22-06 directory.

Apply logs on the backuped files

To allow transportable tablespaces, you have to use the --export option as follows:

# innobackupex --apply-log --use-memory=1G --export /backup/2014-01-25_14-22-06
... output truncated ...
140125 14:33:03 InnoDB: Starting shutdown...
140125 14:33:07 InnoDB: Shutdown completed; log sequence number 70808870412
140125 14:33:07 innobackupex: completed OK!

Make a dump of the database objects with Trite

On the source server, use Trite to make a dump of the database objects.
This step will create a directory containing all the scripts needed to recreate the databases and objects (tables, views, procedures…).

# trite -dump -user=root -password=pass -host=localhost
Dumping to: /backup/localhost_dump20140125173337
DB_INNO_FPT_1: 21 tables, 0 procedures, 0 functions, 0 triggers, 0 views
DB_INNO_FPT_2: 42 tables, 0 procedures, 0 functions, 0 triggers, 3 views
63 total objects dumped
Total runtime = 2.19331264s

These scripts will be used by the Trite client to create the databases and tables on the target server.

Start the Trite server

On the source server, start the Trite server that allows the data files to be copied.
It is better to start it inside tmux or screen.

You have to specify where the backup and dump files are located:

# trite -server -dump_path=/backup/localhost_dump20140125173337 -backup_path=/backup/2014-01-25_14-22-06
Starting server listening on port 12000

Restore the databases and tables

On the target server, MySQL is running as a fresh instance.
The ibdata file is still small on this instance (its size depends on your settings).

The Trite client then does the following:

  • Create the databases and objects (tables, views…)
  • Copy the data files from the source server to the target server
  • Import the tablespaces for each table on the target MySQL instance

# trite -client -user=root -password=pass -socket=/var/lib/mysql/mysql.sock -server_host=localhost
DB_INNO_FPT_1.T1 has been restored
DB_INNO_FPT_1.T2 has been restored
DB_INNO_FPT_1.T3 has been restored
... output truncated ...
Applying triggers for DB_INNO_FPT_1
Applying views for DB_INNO_FPT_1
Applying procedures for DB_INNO_FPT_1
Applying functions for DB_INNO_FPT_1
Total runtime = 13m3.881954844s
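
For the record, the per-table import that Trite automates relies on the transportable tablespace mechanism; for each table it roughly comes down to something like this (a sketch; on Percona Server 5.5 this also requires innodb_import_table_from_xtrabackup to be enabled):

ALTER TABLE DB_INNO_FPT_1.T1 DISCARD TABLESPACE;
-- copy the T1.ibd file (and the T1.exp file produced by --export) into the database directory, then:
ALTER TABLE DB_INNO_FPT_1.T1 IMPORT TABLESPACE;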

All the databases and tables are now available in the target instance.
Note that you can also run the client with multiple worker threads to speed up the copy.
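
For example, something like this (a sketch; check trite -help for the exact flag name in your version):

# trite -client -user=root -password=pass -socket=/var/lib/mysql/mysql.sock -server_host=localhost -workers=4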

Attach the target server as a slave

XtraBackup created a file named xtrabackup_binlog_info in the backup directory on the source server.
This file contains the binary log file name and position of the exact point in the binary log to which the prepared backup corresponds.

Use the information stored in this file to set up the replication between the servers.
The process of configuring a replication slave is not detailed here; you should find that easily through the World Wide Web.
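
Still, here is a minimal sketch of the idea, with hypothetical binlog coordinates and credentials:

# cat /backup/2014-01-25_14-22-06/xtrabackup_binlog_info
mysql-bin.000042    1337

Then, on the target server:

slave> CHANGE MASTER TO MASTER_HOST='source_host', MASTER_USER='replication',
       MASTER_PASSWORD='passwd', MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=1337;
slave> START SLAVE;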

Final steps

Just wait for the replication slave to be up to date and choose your favorite switching method:

  • Switch your application from [S] to [T]
  • Restore [T] datafiles on [S]
  • Your way…

And don’t forget to add a maximum size for the ibdata file in your MySQL configuration file:

innodb_data_file_path = ibdata1:100M:autoextend:max:10G

What can you expect?

The performance of this process depends on the size of your tables, how fast your disks are and how fast your network is.
But rest assured, this process is very efficient for huge datasets.

Again, Trite could be useful for many other use cases; all feedback is welcome.

Finally, I would like to thank Joshua for the tool and for the discussions about how to improve Trite.

Source: http://joshuaprunier.blogspot.fr/2014/02/introducing-trite-tool-for-automating.html

Time to forget show processlist for monitoring?

Disclaimer: I’m not especially an expert in benchmarking; this post just compares different options. All comments and advice are welcome.

I’m not telling you anything new: the show processlist command is a fantastic command-line tool for an instant check.
But what about monitoring your databases with this command embedded in a tool?

Just have a look at this graph:

[Graph: queries per second on a monitored server]

With 5K queries per second, how much will be hidden from a show processlist executed every second? Probably a lot.
So, I wanted to test which alternatives could efficiently capture all the queries over a period of time.

Test procedure and configuration

I used sysbench 0.5 (with oltp.lua) to run my tests on different configurations and tools.
My test server has 20 (hyper-threaded) cores, 128GB of RAM and a RAID 10 disk setup.
These tests are based on Percona Server 5.5.31.
I measured TPS from 1 to 64 concurrent threads.
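
For reference, the sysbench run looked roughly like this (a sketch; paths and values are examples, not my exact settings):

# sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-user=root --mysql-password=pass --mysql-db=sbtest --oltp-table-size=10000000 prepare
# sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-user=root --mysql-password=pass --mysql-db=sbtest --oltp-table-size=10000000 --num-threads=16 --max-time=300 --max-requests=0 run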

Here is the standard benchmark graph (TPS):

[Graph: baseline TPS, 1 to 64 threads]

Slow query log set to 0 (file output)

[Graph: TPS with the slow query log written to a file]

Estimated size of the file: 52GB.
We reach a stable peak of 1000 transactions per second.
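
For reference, “slow query log set to 0” corresponds roughly to these settings (a sketch, not my exact configuration):

slow_query_log      = 1
long_query_time     = 0
log_output          = FILE
slow_query_log_file = /var/log/mysql/slow.log

For the table output test below, log_output is simply switched to TABLE (the mysql.slow_log table).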

Slow query log set to 0 (table output)

[Graph: TPS with the slow query log written to a table]

Estimated size of the table: 8.9GB.
Table contention is probably the cause of the stabilization around 600 transactions per second.

General log in a file

[Graph: TPS with the general log written to a file]

Estimated size of the file: 11GB.
2000 transactions per second can be hoped for here.
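
Again, this corresponds roughly to the following settings (a sketch):

general_log      = 1
log_output       = FILE
general_log_file = /var/log/mysql/general.log

For the table output test below, log_output is switched to TABLE (the mysql.general_log table).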

General log in a table

[Graph: TPS with the general log written to a table]

Estimated size of the table: 9.9GB.
Again, the results are lower than with the file output.

Infinite show processlist in a file

[Graph: TPS with an infinite show processlist logged to a file]

Estimated size of the file: 1.1GB.
The results don’t seem so dramatic, but I’m not sure all the queries were captured in the file.
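
The “infinite show processlist” was done with something like this (a sketch, assuming no pause between iterations):

while true; do
  mysql -u root -ppass -e "SHOW FULL PROCESSLIST" >> /var/log/mysql/processlist.log
done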

MariaDB Audit Plugin in a file

[Graph: TPS with the MariaDB Audit Plugin logging to a file]

Estimated size of the file: 14GB.

plugin-load=server_audit=server_audit.so
server_audit_events             = QUERY
server_audit_file_path          = /var/log/mysql/audit.log
server_audit_logging            = ON
server_audit_output_type        = file

McAfee Audit Plugin (default configuration)

[Graph: TPS with the McAfee Audit Plugin, default configuration]

Estimated size of the file: 22GB.

audit_offsets=6480, 6528, 4080, 4520, 104, 2584
audit_json_file=1
audit_json_socket_name=/tmp/audit.sock
audit_json_log_file=/var/log/mysql/mysql-audit.log
audit_record_cmds=select,insert,update,delete

McAfee Audit Plugin (via socket)

[Graph: TPS with the McAfee Audit Plugin via socket]

Estimated size of the file: 22GB.

audit_offsets=6480, 6528, 4080, 4520, 104, 2584
audit_json_file=1
audit_json_socket=1
audit_json_socket_name=/tmp/audit.sock
audit_json_log_file=/var/log/mysql/mysql-audit.log
audit_record_cmds=select,insert,update,delete

McAfee Audit Plugin (via socket + file sync 1000)

[Graph: TPS with the McAfee Audit Plugin via socket, file sync 1000]

Estimated size of the file: 22GB.

audit_offsets=6480, 6528, 4080, 4520, 104, 2584
audit_json_file=1
audit_json_socket=1
audit_json_socket_name=/tmp/audit.sock
audit_json_log_file=/var/log/mysql/mysql-audit.log
audit_record_cmds=select,insert,update,delete
audit_json_file_sync=1000

Performance Schema activated

[Graph: TPS with Performance Schema activated]

performance_schema
performance_schema_events_waits_history_long_size = 10000
performance_schema_events_waits_history_size = 1000

Conclusions

Auditing a system necessarily has an impact on its performance.
The tools and options presented here can provide various kinds of information.
Also, you have to consider how you’ll handle this data after collecting it.

The show processlist command doesn’t seem so dramatic for performance, but there is no guarantee that all the queries will be captured.
You probably have to choose another solution if you really need to collect reliable data.

Hope these quick tests will help you choose the right tool for your needs.

Wake up European DBA, call for papers for Percona Live London 2014 is open!

The call for papers for Percona Live London 2014 is open. For the fourth consecutive year, PLUK is going to be one of the best community events in Europe.
I have the honour of being the conference committee chairman and the hard task of reviewing the talks with my committee colleagues.

First, let me introduce the committee members:

  • Art van Scheppingen (Spil Games)
  • Nicolai Plum (Booking.com)
  • Luis Motta Campos (Ebay Classifieds Group)
  • Colin Charles (MariaDB)
  • David Busby (Percona)
  • Morgan Tocker (Oracle)
  • Cédric PEINTRE (Dailymotion)

Amazing, isn’t it?! I think we couldn’t have a better committee for a community event.
I’m very glad to take part in this adventure with you guys!

And if you wonder what the committee does, please read this review from Shlomi.
You should know that the committee is fully independent. We are working hard to offer you the best conference possible, as fairly as possible, in line with our beliefs, and with fun.

Now, it’s time to submit your talks and tutorials. Like I said last year, if you are using databases, you might have something to say.
We are looking for the best use cases related to these topics, surprise us!

  • High Availability
  • DevOps
  • Programming
  • Performance Optimization
  • Replication & Backup
  • MySQL in the Cloud
  • MySQL and NoSQL
  • MySQL Case Studies
  • Security
  • What’s New in MySQL

Tell us about your life; I’m sure you have a ton of stories to tell.
We would love to see new faces, new companies and new topics this year again.
Don’t be shy, everyone is eager to hang on your every word.

Read the guidelines and submit your talk now!

Looking forward to meeting you in London in November.
Cheers

[Tuto] A real-life Multi-Master use case with MariaDB

The goal of this tutorial is to show you how to use Multi-Master replication to aggregate databases with the same name, but containing different data, from different masters.

Example:

  • master1 => a French subsidiary
  • master2 => a British subsidiary

Both have the same database (PRODUCTION) but the data are totally different:

[Diagram: the PRODUCTION database on master1 and master2 with different data]

We will start with 3 servers (2 masters and 1 slave); you can add more masters if needed.

Information

  • 10.10.16.231: first master (aka master1) => a French subsidiary
  • 10.10.16.232: second master (aka master2) => a British subsidiary
  • 10.10.16.233: slave (multi-master) (aka slave)

If your 3 servers are already correctly installed, go directly to “Dump your databases”.

Default installation on 3 servers

apt-get -y install python-software-properties
apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db

The main reason I put the apt configuration in a separate file is that we use a configuration manager, and it overwrites /etc/apt/sources.list.
Also, if any trouble occurs, just remove this file and you are back to the default configuration.

echo "deb http://mirror.stshosting.co.uk/mariadb/repo/10.0/ubuntu precise main" > /etc/apt/sources.list.d/mariadb.list
apt-get update
apt-get install mariadb-server

The goal of this small script is to get the IP of the server and compute a CRC32 of this IP to generate a unique server-id.
Generally the crc32 command isn’t installed, so we will use the MySQL function instead.
Even if your server has more than one interface, you should have no trouble because the IP should be unique.

# get the MySQL maintenance credentials from /etc/mysql/debian.cnf
user=`egrep user /etc/mysql/debian.cnf | tr -d ' ' | cut -d '=' -f 2 | head -n1 | tr -d '\n'`
passwd=`egrep password /etc/mysql/debian.cnf | tr -d ' ' | cut -d '=' -f 2 | head -n1 | tr -d '\n'`
# get the IP address of eth0
ip=`ifconfig eth0 | grep "inet addr" | awk -F: '{print $2}' | awk '{print $1}' | head -n1 | tr -d '\n'`
# compute the CRC32 of the IP with MySQL
crc32=`mysql -u $user -p$passwd -e "SELECT CRC32('$ip')"`
# keep only the numeric value as the server-id
id_server=`echo -n $crc32 | cut -d ' ' -f 2 | tr -d '\n'`

Here is a minimalist configuration file; consider adapting it to your own needs.
If you are interested in my default MariaDB 10 configuration at Photobox, just click here

cat >> /etc/mysql/conf.d/mariadb10.cnf << EOF
[mysqld]
 character-set-client-handshake = FALSE
 character-set-server = utf8
 collation-server = utf8_general_ci
 bind-address = 0.0.0.0
 external-locking = off
 skip-name-resolve
 #make a crc32 of ip server
 server-id=$id_server
 #to prevent auto start of thread slave
 skip-slave-start
[mysql]
 default-character-set = utf8
EOF

Then, restart the server:

/etc/init.d/mysql restart
* Stopping MariaDB database server mysqld [ OK ]
* Starting MariaDB database server mysqld [ OK ]
* Checking for corrupt, not cleanly closed and upgrade needing tables.

Repeat these actions on the 3 servers.

Create users on both masters

Create the replication user on both masters

On master1:

mysql -u root -p -e "GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'replication'@'%' IDENTIFIED BY 'passwd'"

On master2:

mysql -u root -p -e "GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'replication'@'%' IDENTIFIED BY 'passwd'"

Create a user for external backups

On master1 and master2:

mysql -u root -p -e "GRANT SELECT, LOCK TABLES, RELOAD, REPLICATION CLIENT, SUPER ON *.* TO 'backup'@'%' IDENTIFIED BY 'passwd' WITH GRANT OPTION"

Only for testing

If you don’t already have such a configuration and you want to test it:

Create a database on master1:

 master1 [(none)]> CREATE DATABASE PRODUCTION;

Create a database on master2:

 master2 [(none)]> CREATE DATABASE PRODUCTION;

Load slave with master data

All the commands from here to the end have to be run on the slave server:

  • --master-data=2: gets the binary log name and position, and puts them at the beginning of the dump as a comment
  • --single-transaction: issues a BEGIN SQL statement before dumping data from the server (works only with InnoDB tables)

mysqldump -h 10.10.16.231 -u root -p --master-data=2 --single-transaction PRODUCTION > PRODUCTION_10.10.16.231.sql
mysqldump -h 10.10.16.232 -u root -p --master-data=2 --single-transaction PRODUCTION > PRODUCTION_10.10.16.232.sql

Create both new databases:

slave[(none)]> CREATE DATABASE PRODUCTION_FR;
slave[(none)]> CREATE DATABASE PRODUCTION_UK;

Then load the data:

mysql -h 10.10.16.233 -u root -p PRODUCTION_FR < PRODUCTION_10.10.16.231.sql
mysql -h 10.10.16.233 -u root -p PRODUCTION_UK < PRODUCTION_10.10.16.232.sql

Set up replication on slave

Edit both dump files, get the binlog name and position, and use them as follows:

French subsidiary:

less PRODUCTION_10.10.16.231.sql

Find this line (the file and position will be different):

-- CHANGE MASTER TO MASTER_LOG_FILE='mariadb-bin.000010', MASTER_LOG_POS=771;

Replace the name and position in this command:

CHANGE MASTER 'PRODUCTION_FR' TO MASTER_HOST = "10.10.16.231", MASTER_USER = "replication", MASTER_PASSWORD ="passwd", MASTER_LOG_FILE='mariadb-bin.000010', MASTER_LOG_POS=771;

British subsidiary:

less PRODUCTION_10.10.16.232.sql

Find this line (the file and position will be different):

-- CHANGE MASTER TO MASTER_LOG_FILE='mariadb-bin.000010', MASTER_LOG_POS=771;

Replace the name and position in this command:

CHANGE MASTER 'PRODUCTION_UK' TO MASTER_HOST = "10.10.16.232", MASTER_USER = "replication", MASTER_PASSWORD ="passwd", MASTER_LOG_FILE='mariadb-bin.000010', MASTER_LOG_POS=771;

Replication rules in the config file

Unfortunately, the replicate-rewrite-db option doesn’t exist as a dynamic variable, so we can’t set up this configuration without restarting the slave server.
Note that each option is prefixed with the connection name of the replication stream it applies to.
Add the following lines to /etc/mysql/conf.d/mariadb10.cnf on the slave:

PRODUCTION_FR.replicate-rewrite-db="PRODUCTION->PRODUCTION_FR"
PRODUCTION_UK.replicate-rewrite-db="PRODUCTION->PRODUCTION_UK"
PRODUCTION_FR.replicate-do-db="PRODUCTION_FR"
PRODUCTION_UK.replicate-do-db="PRODUCTION_UK"

Then you can restart MySQL (don’t forget to start the slaves afterwards, since we skipped that at startup with skip-slave-start):

/etc/init.d/mysql restart

Start replication:

  • one by one:

START SLAVE 'PRODUCTION_FR';
START SLAVE 'PRODUCTION_UK';

  • all together:

START ALL SLAVES;

Check the replication:

slave[(none)]>SHOW SLAVE 'PRODUCTION_UK' STATUS;
slave[(none)]>SHOW SLAVE 'PRODUCTION_FR' STATUS;
slave[(none)]>SHOW ALL SLAVES STATUS;

Tests

On slave:

slave [(none)]> use PRODUCTION_FR;
Database changed
slave [PRODUCTION_FR]> show tables;
Empty set (0.00 sec)
slave [(none)]> use PRODUCTION_UK;
Database changed
slave [PRODUCTION_UK]> show tables;
Empty set (0.00 sec)

On master1:

master1 [(none)]> use PRODUCTION;
Database changed
master1 [PRODUCTION]>CREATE TABLE `france` (id int);
Query OK, 0 rows affected (0.13 sec)
master1 [PRODUCTION]> INSERT INTO `france` SET id=1;
Query OK, 1 row affected (0.00 sec)

On master2:

master2 [(none)]> use PRODUCTION;
Database changed
master2 [PRODUCTION]>CREATE TABLE `british` (id int);
Query OK, 0 rows affected (0.13 sec)
master2 [PRODUCTION]> INSERT INTO `british` SET id=2;
Query OK, 1 row affected (0.00 sec)

On slave:

-- for FRANCE
slave [(none)]> use PRODUCTION_FR;
Database changed
slave [PRODUCTION_FR]> show tables;
 +-------------------------+
 | Tables_in_PRODUCTION_FR |
 +-------------------------+
 | france                  |
 +-------------------------+
 1 row in set (0.00 sec)
slave [PRODUCTION_FR]> SELECT * FROM france;
 +------+
 | id   |
 +------+
 |    1 |
 +------+
 1 row in set (0.00 sec)
 
-- for British
 slave [(none)]> use PRODUCTION_UK;
 Database changed
 slave [PRODUCTION_UK]> show tables;
 +-------------------------+
 | Tables_in_PRODUCTION_UK |
 +-------------------------+
 | british                 |
 +-------------------------+
 1 row in set (0.00 sec)
slave [PRODUCTION_UK]> SELECT * FROM british;
 +------+
 | id   |
 +------+
 |    2 |
 +------+
 1 row in set (0.00 sec)

It works!

Limitations

WARNING: it doesn’t work when the database is explicitly specified in the query.
This works fine:

 USE PRODUCTION;
 UPDATE `ma_table` SET id=1 WHERE id =2;

This query will break the replication:

 USE PRODUCTION;
 UPDATE `PRODUCTION`.`ma_table` SET id=1 WHERE id =2;

=> replication breaks because the database `PRODUCTION` does not exist on this server.

Real examples

Missing update

On master1:

master1 [(none)]>UPDATE `PRODUCTION`.`france` SET id=3 WHERE id =1;
Query OK, 1 row affected (0.02 sec)
Rows matched: 1 Changed: 1 Warnings: 0
master1 [(none)]> select * from `PRODUCTION`.`france`;
 +------+
 | id   |
 +------+
 |    3 |
 +------+
 1 row in set (0.00 sec)

On slave:

slave [PRODUCTION_FR]> SELECT * FROM france;
 +------+
 | id   |
 +------+
 |    1 |
 +------+
 1 row in set (0.00 sec)

In this case we missed the update.
It’s a real problem: replication should have crashed, but instead the slave is silently out of sync with master1 and we don’t know it.

Replication crash

On master1:

master1[(none)]> use PRODUCTION;
Database changed
master1 [PRODUCTION]> SELECT * FROM `PRODUCTION`.`france`;
 +------+
 | id   |
 +------+
 |    3 |
 +------+
 1 row in set (0.00 sec)
master1 [PRODUCTION]>UPDATE `PRODUCTION`.`france` SET id=4 WHERE id =3;
Query OK, 1 row affected (0.01 sec)
Rows matched: 1 Changed: 1 Warnings: 0
master1 [PRODUCTION]> SELECT * FROM `PRODUCTION`.`france`;
 +------+
 | id   |
 +------+
 |    4 |
 +------+
 1 row in set (0.01 sec)

On pmacli:

[Screenshot: the replication error reported in pmacli]

On slave:

slave [PRODUCTION_FR]> show slave 'PRODUCTION_FR' status\G
*************************** 1. row ***************************
...
Slave_IO_State: Waiting for master to send event
Last_SQL_Error: Error 'Table 'PRODUCTION.france' doesn't exist' on query. Default database: 'PRODUCTION_FR'. Query: 'UPDATE `PRODUCTION`.`france` SET id=4 WHERE id =3'
Replicate_Ignore_Server_Ids:
Master_Server_Id: 2370966657
Master_SSL_Crl:
Master_SSL_Crlpath:
Using_Gtid: No
Gtid_IO_Pos:
1 row in set (0.00 sec)

We got the error!


Author: Aurélien LEQUOY
License: This article is under the GNU General Public License v3.0: http://opensource.org/licenses/GPL-3.0