Hi all,
I'm hoping you can help me. I'm setting up a new cluster with two hosts, db1 and db2. Once this moves to a production environment, the management node will be on a third host. I've gone through the install several times, but I can't get the mysqld nodes to connect. It's very frustrating, and I'm not sure what the problem is or where to start looking. Here are my configs and process listings:
[root@db1 ~]# ndb_mgm -e show
Connected to Management Server at: db1:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @127.0.0.1 (Version: 5.0.18, starting, Nodegroup: 0, Master)
id=3 @192.168.168.135 (Version: 5.0.18, starting, Nodegroup: 0, Master)
[ndb_mgmd(MGM)] 1 node(s)
id=1 (Version: 5.0.18)
[mysqld(API)] 2 node(s)
id=4 (not connected, accepting connect from db1)
id=5 (not connected, accepting connect from db2)
===========
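One thing that jumps out at me in the output above: id=2 registers as @127.0.0.1, while id=3 shows a real LAN address. Could db1 be resolving its own hostname to loopback? I'm wondering whether /etc/hosts on both machines needs to look something like this (db1's address below is purely a guess on my part -- only db2's 192.168.168.135 actually appears in the output above):

# /etc/hosts -- hostnames mapped to the real LAN addresses, not 127.0.0.1
127.0.0.1        localhost localhost.localdomain
192.168.168.134  db1 db1.hyperboy.com    # placeholder address, not confirmed
192.168.168.135  db2 db2.hyperboy.com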
[root@db1 ~]# ps aux | grep mysql
root 3441 0.0 0.0 4332 1080 pts/0 S 13:55 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/db1.hyperboy.com.pid
mysql 3461 0.0 0.4 133840 18488 pts/0 Sl 13:55 0:00 /usr/sbin/mysqld-max --basedir=/ --datadir=/var/lib/mysql --user=mysql --pid-file=/var/lib/mysql/db1.hyperboy.com.pid --skip-locking
===========
[root@db1 ~]# cat /etc/my.cnf
# Options for mysqld process:
[mysqld]
ndbcluster
ndb-connectstring=db1
# Options for ndbd process:
[mysql_cluster]
ndb-connectstring=db1
===========
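(For what it's worth, I believe the connectstring can also carry the management server's port explicitly -- 1186 is the default -- so this should be equivalent:

ndb-connectstring=db1:1186

I mention it only in case name resolution plus the implicit port is part of my problem.)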
[root@db1 ~]# cat /var/lib/mysql-cluster/config.ini
# Options affecting ndbd processes on all data nodes:
[NDBD DEFAULT]
NoOfReplicas= 2 # Number of replicas
DataMemory= 1G # How much memory to allocate for data storage
IndexMemory= 512M # How much memory to allocate for index storage
[NDB_MGMD DEFAULT]
# TCP/IP options:
[TCP DEFAULT]
#portnumber=2202 # This is the default; however, you can use any
# port that is free for all the hosts in cluster
# Note: It is recommended beginning with MySQL 5.0 that
# you do not specify the portnumber at all and simply allow
# the default value to be used instead
# Management process options:
[NDB_MGMD]
hostname=db1 # Hostname or IP address of MGM node
ArbitrationRank=1 # high priority
datadir=/var/lib/mysql-cluster
#[NDB_MGMD]
#hostname=db2
#ArbitrationRank=2
#datadir=/var/lib/mysql-cluster
[NDBD]
hostname=db1 # Hostname or IP address
datadir=/var/lib/mysql-cluster # Directory for this data node's datafiles
[NDBD]
hostname=db2 # Hostname or IP address
datadir=/var/lib/mysql-cluster # Directory for this data node's datafiles
[MYSQLD]
hostname=db1
[MYSQLD]
hostname=db2
===========
[root@db1 ~]# ps aux | grep ndb
root 3312 0.0 0.2 13628 9520 ? Ssl 13:51 0:00 ndb_mgmd
root 3325 0.0 0.0 6528 1952 ? Ss 13:51 0:00 ndbd --initial
root 3326 0.0 5.4 1969368 223052 ? Sl 13:51 0:00 ndbd --initial
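For completeness, the startup order I've been using is: management node first, then the data nodes, then the MySQL servers. Roughly:

# on db1: start the management node, pointing it at the cluster config
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
# on db1 and db2: start the data nodes
# (my understanding is that --initial is only meant for the very first
# start, or after a config change, since it re-initializes the node's data)
ndbd --initial
# on db1 and db2: start the SQL nodes
mysqld_safe &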
I do notice that both ndb nodes say "starting". Is that unusual or bad? That's about all I've got. I'd really appreciate any pointers, tips, or solutions -- I'm new to this whole clustering thing.
thanks,
Jonathan Nicol
jnicol@bluegecko.net