
Re: replication with redhat clustering
Posted by: Rick James
Date: March 27, 2013 11:57PM

> data directory on a different server (both nodes' MySQL services, which will be active, will connect to this data directory).

I'm sorry; I find that to be an almost useless configuration.
At best, it assumes that the likely failure will be the CPU or motherboard, not the disk or network.

MySQL has no way to coordinate two instances touching the same files. Each instance does its own caching (for both writing and reading), so each will make wrong assumptions about what is actually on disk.

What kind of "load balancing" do you need? If you are heavy on reads, then one Master with multiple Slaves will provide nearly unlimited read scaling. (MySQL does not ship with a load balancer; there are many hardware and software solutions for that.)
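
Pointing a Slave at the Master is only a few statements. A rough sketch (the host, user, password, and binlog coordinates here are placeholders; get the real coordinates from SHOW MASTER STATUS on your Master):

    -- On the Master: create a replication account (names are illustrative)
    CREATE USER 'repl'@'%' IDENTIFIED BY 'secret';
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
    SHOW MASTER STATUS;   -- note File and Position

    -- On each Slave: point at the Master using those coordinates
    CHANGE MASTER TO
        MASTER_HOST     = 'master.example.com',
        MASTER_USER     = 'repl',
        MASTER_PASSWORD = 'secret',
        MASTER_LOG_FILE = 'mysql-bin.000001',
        MASTER_LOG_POS  = 107;
    START SLAVE;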

With Slave(s), you can take a Slave offline (not accessed by clients; removed from the load balancer's list of valid readonly servers) and take a dump of MySQL or an image of the entire disk.
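
Roughly like this (the mysqldump flags shown are one common choice for InnoDB, not the only option):

    -- On the Slave being backed up, after removing it from the balancer:
    STOP SLAVE;
    -- Then, from the shell:
    --   mysqldump --all-databases --single-transaction > backup.sql
    START SLAVE;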

Write scaling is a different matter, especially since you require FOREIGN KEYs. Neither NDB Cluster nor Percona XtraDB Cluster allows FKs.

Consider two separate machines in a Master-Master setup, but with only one of them taking writes. By separate, I mean not sharing the same data -- either by having separate directories on the same SAN, or (better) by using local drives on each machine.
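
A minimal my.cnf sketch for such a pair (server ids and offsets are illustrative; the point is that AUTO_INCREMENT values can never collide and the passive side rejects ordinary writes):

    # Master A (takes the writes)
    server-id                = 1
    log-bin                  = mysql-bin
    auto_increment_increment = 2
    auto_increment_offset    = 1

    # Master B (readonly)
    server-id                = 2
    log-bin                  = mysql-bin
    auto_increment_increment = 2
    auto_increment_offset    = 2
    read_only                = 1   # the replication thread and SUPER users can still write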

For Backup, disconnect the readonly Master and take the backup. Then re-attach it, wait for replication to catch up, and put it back "into rotation" behind the load balancer.
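
Before re-adding it, check that replication has actually caught up; a quick sanity check:

    -- On the re-attached readonly Master:
    SHOW SLAVE STATUS\G
    -- Put it back in rotation only when Slave_IO_Running and
    -- Slave_SQL_Running are both 'Yes' and Seconds_Behind_Master is 0.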
