Strange Behaviour in NDB after reconfiguration
Hello everyone,
Just wondering if anyone has run into a case like this in practice. I had to reconfigure my network, which meant changing the IP addresses of my NDB nodes. To do this, I modified the config.ini file and updated the addresses. After that I restarted the cluster and started ndb_mgmd with the --initial option (so that it would re-read config.ini instead of using its cached binary configuration).
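For context, the change was only to the HostName entries in config.ini, roughly like this (the addresses are the new ones from my setup; all other parameters are omitted here and were left untouched):

```ini
# config.ini (excerpt) -- only the HostName values were changed
[ndb_mgmd]
NodeId=1
HostName=192.168.0.1

[ndbd]
NodeId=2
HostName=192.168.0.2

[ndbd]
NodeId=3
HostName=192.168.0.3
```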
Overall, everything worked as planned, but I get very strange output in the management console (ndb_mgm SHOW):
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.0.2 (mysql-5.7.18 ndb-7.6.3, Nodegroup: 0, *)
id=3 @192.168.0.3 (mysql-5.7.18 ndb-7.6.3, Nodegroup: 0, *)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.1 (mysql-5.7.19 ndb-7.5.7)

[mysqld(API)] 3 node(s)
id=4 @192.168.0.2 (mysql-5.7.18 ndb-7.6.3)
id=5 @192.168.0.3 (mysql-5.7.18 ndb-7.6.3)
id=6 (not connected, accepting connect from 192.168.0.1)
Before the upgrade, only one of the nodes in the node group was marked with a star (as the master node), but now both nodes are marked as masters, which is very strange. Has anyone encountered this, and is there any action I need to take to get the old behaviour back, where only a single node in the group is designated as master?
The only solution I was able to find was to restart one of the data nodes with the --initial option, but I'm a little reluctant to do that, since --initial wipes the node's local data files and forces it to refetch everything from the other node in the group.
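Concretely, what I'd be trying is something like the following, one data node at a time (the node ID and connect string are from my setup above; I haven't actually run this yet, so treat it as a sketch):

```
# From the management client: stop data node 2 only
ndb_mgm -e "2 STOP"

# On node 2's host: restart it with --initial so it wipes its local
# file system and recovers its data from node 3 in the same node group
ndbd --initial --ndb-connectstring=192.168.0.1

# Confirm it has rejoined before considering the same for node 3
ndb_mgm -e "SHOW"
```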
Any feedback would be greatly appreciated!
Posted: August 04, 2018 11:33AM