MySQL Forums
Forum List  »  NDB clusters

multi mgmd bug
Posted by: He Ming
Date: May 04, 2006 03:35PM

My config.ini
# Options affecting ndbd processes on all data nodes:
[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=150M
IndexMemory=50M

# TCP/IP options:
[TCP DEFAULT]
portnumber=2202

# Management process options:
[NDB_MGMD]
Id=1
hostname=192.168.0.141
datadir=/usr/local/mysql-cluster

[NDB_MGMD]
Id=2
hostname=192.168.0.142
datadir=/usr/local/mysql-cluster


# Options for data node "A":
[NDBD]
Id=3
hostname=192.168.0.141
datadir=/usr/local/mysql/var

# Options for data node "B":
[NDBD]
Id=4
hostname=192.168.0.142
datadir=/usr/local/mysql/var

# SQL node options:
[MYSQLD]
Id=5
hostname=192.168.0.143
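
With two management servers, each ndbd and the SQL node should be given a connect string that lists both mgmd hosts, otherwise a node can only ever reach the single mgmd it was pointed at. A minimal sketch (the `[mysql_cluster]` group in my.cnf is read by ndbd, ndb_mgmd, and mysqld; 1186 is the default management port — adjust to your setup):

```ini
# my.cnf on each ndbd / mysqld host (sketch, not the poster's actual file)
[mysql_cluster]
ndb-connectstring=192.168.0.141:1186,192.168.0.142:1186
```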

SHOW output in ndb_mgm after a normal cluster start:

Connected to Management Server at: 192.168.0.141:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 @192.168.0.141 (Version: 4.1.19, Nodegroup: 0)
id=4 @192.168.0.142 (Version: 4.1.19, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 2 node(s)
id=1 @192.168.0.141 (Version: 4.1.19)
id=2 @192.168.0.142 (Version: 4.1.19)

[mysqld(API)] 1 node(s)
id=5 @192.168.0.143 (Version: 4.1.19)

Note that 192.168.0.142 is the ndbd master.

If I press 192.168.0.142's reset button or pull out its RJ45 cable, the ndbd on 192.168.0.141 crashes with no error shown (versions 5.0 and 5.1 report "error 2305"). mysqld itself keeps running, but it shows as not connected in 192.168.0.141's mgmd.
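
For context, the documented NDB rule when the cluster splits is roughly: a partition holding a majority of the data nodes survives outright, an exact half must win arbitration (the arbitrator is normally one of the mgmds), and a minority shuts down. A greatly simplified toy model of that decision — not NDB's actual code, and it conflates "wins arbitration" with "can still reach the arbitrator":

```python
def partition_survives(nodes_alive: int, total_data_nodes: int,
                       arbitrator_reachable: bool) -> bool:
    """Toy sketch of NDB's network-partition rule (not real NDB code):
    majority survives, an exact half needs the arbitrator, a minority
    shuts down to avoid split brain."""
    if 2 * nodes_alive > total_data_nodes:
        return True              # clear majority: keep running
    if 2 * nodes_alive == total_data_nodes:
        # Split down the middle: only the side that wins arbitration
        # may continue; the other side aborts.
        return arbitrator_reachable
    return False                 # minority: shut down

# With 2 data nodes the survivor holds exactly half the cluster, so
# everything hinges on whether it can still reach an arbitrator:
print(partition_survives(1, 2, True))   # peer's ndbd killed, its mgmd still up
print(partition_survives(1, 2, False))  # whole peer host dead: no arbitrator
```

This matches the behaviour described above: a soft kill leaves the other host's mgmd alive to arbitrate, while a hard host failure can take the arbitrator down with it.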

Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 (not connected, accepting connect from 192.168.0.141)
id=4 (not connected, accepting connect from 192.168.0.142)

[ndb_mgmd(MGM)] 2 node(s)
id=1 @192.168.0.141 (Version: 4.1.19)
id=2 (not connected, accepting connect from 192.168.0.142)

[mysqld(API)] 1 node(s)
id=5 (not connected, accepting connect from 192.168.0.143)

Now I bring 192.168.0.142 back online and afterwards restart 192.168.0.141's ndbd; everything is OK:

Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 @192.168.0.141 (Version: 4.1.19, Nodegroup: 0, Master)
id=4 @192.168.0.142 (Version: 4.1.19, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=1 @192.168.0.141 (Version: 4.1.19)
id=2 @192.168.0.142 (Version: 4.1.19)

[mysqld(API)] 1 node(s)
id=5 @192.168.0.143 (Version: 4.1.19)

Now 192.168.0.141 is the ndbd master. I shut down 192.168.0.142 again, and failover succeeds:

Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 @192.168.0.141 (Version: 4.1.19, Nodegroup: 0, Master)
id=4 (not connected, accepting connect from 192.168.0.142)

[ndb_mgmd(MGM)] 2 node(s)
id=1 @192.168.0.141 (Version: 4.1.19)
id=2 (not connected, accepting connect from 192.168.0.142)

[mysqld(API)] 1 node(s)
id=5 @192.168.0.143 (Version: 4.1.19)

But note: when I kill the master's ndbd process, failover succeeds — that is the "soft" case. In the "hard" case (power off, reset button, unplugging the network), failover does not succeed!
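
One thing worth checking in this setup is which node is the arbitrator at the moment of failure. By default the management servers have ArbitrationRank=1 and one of them acts as the current arbitrator; here each mgmd runs on the same host as a data node, so a hard failure of that host removes a data node and, possibly, the arbitrator in one blow, whereas killing only the ndbd process leaves the local mgmd alive. The relevant config.ini parameter, shown explicitly as a sketch (ArbitrationRank=1 is already the default for [NDB_MGMD] sections):

```ini
[NDB_MGMD]
Id=1
hostname=192.168.0.141
ArbitrationRank=1   # may act as arbitrator (default for mgmd nodes)
```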

Is this a bug, or a known limitation?
