MySQL Forums

Backup strategy with MySQL
Posted by: Rony Pan
Date: December 09, 2015 08:25PM

Dear all,
We are designing our production system around MySQL Community Edition, so as you all know MySQL Enterprise Backup is not an option in our case.
Currently we have only two servers, one master and one slave. All INSERT/UPDATE/DELETE operations are routed to the master and all SELECT operations to the slave, so both servers carry a workload.
We need a solution that does not interfere with our transactions and performs well, for example finishing a 500 GB backup in under half an hour.
Our current plan is to use mysqldump with the --flush-logs, --master-data=2, and --single-transaction options, and we have the following concerns:
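As a minimal sketch of the invocation described above (host, user, and backup path are placeholders, not from the original post), run against the slave so the master is unaffected:

```shell
# Sketch only: slave-host, backup_user, and /backups are assumed names.
# --single-transaction : consistent InnoDB snapshot without locking tables
# --master-data=2      : write the binlog coordinates as a comment in the dump
# --flush-logs         : rotate the binary log when the dump starts
mysqldump \
  --host=slave-host --user=backup_user -p \
  --single-transaction \
  --master-data=2 \
  --flush-logs \
  --all-databases \
  --routines --events --triggers \
  | gzip > /backups/full_$(date +%F).sql.gz
```

Note that --master-data requires binary logging to be enabled on the server you dump from.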

1. With the --single-transaction option, mysqldump opens a consistent-read snapshot and reads the data as it was when the dump began. Consistent reads reconstruct old row versions from the undo log, but undo space is limited. We don't know whether the entire undo history must be kept until mysqldump completes, or whether undo records can be purged once mysqldump has already read the rows they cover.
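One way to observe this concern in practice: InnoDB's purge thread cannot discard undo records that an open read view may still need, so the "history list length" tends to grow while a long-running consistent read (such as the dump) is open. It can be watched like this (assumes MySQL 5.6+ for the innodb_metrics query):

```shell
# Option 1: grep the history list length out of the InnoDB status report.
mysql -e "SHOW ENGINE INNODB STATUS\G" | grep -i "History list length"

# Option 2 (MySQL 5.6+): read the same counter from information_schema.
mysql -e "SELECT count FROM information_schema.innodb_metrics
          WHERE name = 'trx_rseg_history_len';"
```

If this number keeps climbing for the whole duration of the dump under heavy write load, undo space consumption during the backup window is a real sizing concern.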

2. The performance of mysqldump is not acceptable. A rough calculation suggests a 500 GB dump would take about 8 hours with a single thread, and we found no option that lets mysqldump run multi-threaded. If we run several mysqldump processes in parallel, we cannot guarantee that every one starts at the same log position. (Maybe on the slave we could stop replication for a while and then start all the dumps, but that could cause data issues.)
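The stop-replication idea mentioned above can be sketched as follows (a hedged sketch: database names and paths are placeholders, and it assumes the dump runs on the slave). With the SQL thread stopped, the slave's data does not change, so every parallel dump sees the same state and the recorded coordinates apply to all of them:

```shell
# Sketch under assumptions: run on the slave; app_db1..3 and /backups are placeholders.
# Stopping only the SQL thread freezes applied data while the IO thread keeps
# fetching binlogs from the master.
mysql -e "STOP SLAVE SQL_THREAD;"
mysql -e "SHOW SLAVE STATUS\G" > /backups/position.txt   # record binlog coordinates

# One mysqldump per database, running in parallel.
for db in app_db1 app_db2 app_db3; do
  mysqldump --single-transaction "$db" | gzip > "/backups/${db}.sql.gz" &
done
wait

mysql -e "START SLAVE SQL_THREAD;"
```

The parallelism here is per database, so one very large database still dumps on a single thread; that is the limitation that pushes people toward chunk-level parallel tools.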

Could anyone suggest a production-grade backup and recovery solution?


