
Network Separation with 3 Replicas?
Posted by: Adam Long
Date: July 22, 2020 01:14PM

I keep seeing the "minimum configuration" described as 2 Data Nodes and 1 Management Node.

I also understand why you wouldn't want to put the Management Node on the same server as one of the Data Nodes - a single server failure could then take out the entire cluster.

*But*, with official support for NoOfReplicas=3, there's an obvious scenario that doesn't seem to be discussed - 3 Data Nodes, with the Management Node installed on the same server as one of the Data Nodes. That's the same number of servers as the canonical case.
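
Concretely, I mean a config.ini along these lines (hostnames invented; host-a carries both the Management Node and a Data Node):

# Hypothetical layout - host names are made up
[ndbd default]
NoOfReplicas=3

[ndb_mgmd]
NodeId=1
HostName=host-a

[ndbd]
NodeId=2
HostName=host-a

[ndbd]
NodeId=3
HostName=host-b

[ndbd]
NodeId=4
HostName=host-c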

However, it's not clear to me what is supposed to happen in the case of network separation (split-brain for anyone searching for posts with that term).

I was reading Pro MySQL NDB Cluster, and it had a handy flow chart describing the process as it existed in 7.6: if a partition can see more than 50% of the data nodes, it keeps going; if it can't, it asks the arbitrator what to do.
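
To check that I'm reading the chart right, here's my paraphrase as a small Python sketch (the function name and labels are mine, not anything from the book or the server source):

# My reading of the 7.6 flow chart - not actual ndbd logic.
def partition_decision(visible, total, arbitrator_approves):
    """Decide what one side of a network split does.

    visible: data nodes this partition can see (including its own)
    total: data nodes configured in the cluster
    arbitrator_approves: the arbitrator's answer, if it is asked
    """
    if 2 * visible > total:    # sees more than 50% of the data nodes
        return "keep running"  # no arbitration needed
    # cannot see a majority: the arbitrator decides
    return "keep running" if arbitrator_approves else "shut down"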

Given that, let's say the split happens such that the server hosting both a Data Node and the Management Node ends up in one partition, and the other partition contains just Data Nodes. One side has a minority of the Data Nodes plus access to the Arbitrator; the other has the majority of the Data Nodes but no Arbitrator.
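
Plugging that split into the sketch above (and assuming the minority side can still reach the Arbitrator and gets a "yes" since they share a server - my assumption, not anything documented):

# Partition A: 1 Data Node + the Management Node (Arbitrator)
partition_decision(visible=1, total=3, arbitrator_approves=True)   # -> "keep running"?
# Partition B: 2 Data Nodes, no Arbitrator reachable
partition_decision(visible=2, total=3, arbitrator_approves=False)  # -> "keep running"

Taken literally, both sides keep running, which is exactly the split-brain the Arbitrator is supposed to prevent - so I'm clearly missing a rule somewhere.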

What is the outcome supposed to be?
