MySQL Forums
Forum List  »  NDB clusters

Please help me with cluster 5.1.14 - a major issue has occurred
Posted by: Nilnandan Joshi
Date: January 03, 2007 08:31AM

Hi,

I have created a cluster environment on 4 PCs
(THIS IS A TESTING ENVIRONMENT):

1 - SQL node (512 MB RAM)
2 - MGM node (512 MB RAM)
3 - data node 1 (2 GB RAM)
4 - data node 2 (2 GB RAM)

My config.ini looks like this:

[NDBD DEFAULT]
NoOfReplicas= 2
RedoBuffer=32M
TimeBetweenLocalCheckpoints=6
NoOfFragmentLogFiles=32
DataDir= /var/lib/mysql-cluster
DataMemory = 512M
IndexMemory = 64M
MaxNoOfConcurrentTransactions = 500
MaxNoOfConcurrentOperations = 250000

[NDB_MGMD]
Id=1
HostName= 172.18.1.139

[NDBD]
Id=3
HostName=172.18.1.140

[NDBD]
Id=4
HostName=172.18.1.141

[TCP DEFAULT]

[TCP]
NodeId1=3
NodeId2=4
HostName1=172.18.1.140
HostName2=172.18.1.141

[MYSQLD DEFAULT]

[MYSQLD]
Id=2
HostName=172.18.1.138

[MYSQLD]
Id=5

For disk data storage I have created a logfile group and a tablespace like this:

CREATE LOGFILE GROUP lg_1
ADD UNDOFILE 'undo_1.dat'
INITIAL_SIZE 32M
UNDO_BUFFER_SIZE 8M
ENGINE NDB;

CREATE TABLESPACE ts_1
ADD DATAFILE 'data_1.dat'
USE LOGFILE GROUP lg_1
INITIAL_SIZE 200M
ENGINE NDB;

ALTER TABLESPACE ts_1
ADD DATAFILE 'data_2.dat'
INITIAL_SIZE 200M
ENGINE NDB;
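For anyone diagnosing this: in MySQL 5.1 the disk data files created above can be inspected through INFORMATION_SCHEMA.FILES. A query along these lines (a sketch, assuming a running cluster and a connected SQL node) shows how much of each 200M data file is actually in use:

```sql
-- FREE_EXTENTS vs. TOTAL_EXTENTS shows how full each data file of ts_1 is;
-- EXTENT_SIZE is the size of one extent in bytes.
SELECT FILE_NAME, TOTAL_EXTENTS, FREE_EXTENTS, EXTENT_SIZE
FROM INFORMATION_SCHEMA.FILES
WHERE TABLESPACE_NAME = 'ts_1'
  AND FILE_TYPE = 'DATAFILE';
```

If FREE_EXTENTS is still high while inserts fail with "table is full", the limit being hit is more likely DataMemory than the tablespace itself.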

Now I have created one database on the cluster, named db_1. It has just one table, named temp_1:

CREATE TABLE temp_1
(id bigint(64) NOT NULL,
name varchar(32) NOT NULL,
address text NOT NULL,
location varchar(255) NOT NULL,
jtype varchar(100) NOT NULL,
married char(1) NOT NULL,
sex char(1) NOT NULL,
bdate timestamp NOT NULL,
id_2 bigint(64) NOT NULL)
TABLESPACE ts_1 STORAGE DISK
ENGINE=ndbcluster DEFAULT CHARSET=latin1 COMMENT='Job Keywords Information';
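For reference, once some rows are loaded, the server's own estimate of the per-row cost of this table can be checked (a sketch, assuming the SQL node is connected; the numbers are approximate for NDB tables):

```sql
-- AVG_ROW_LENGTH and DATA_LENGTH are the server's estimates for temp_1;
-- for NDB they reflect fixed-width storage of the VARCHAR columns.
SELECT TABLE_NAME, TABLE_ROWS, AVG_ROW_LENGTH, DATA_LENGTH
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'db_1'
  AND TABLE_NAME = 'temp_1';
```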

But I can only insert 253,184 records into this table; after that it gives an error like:

Error: 'The table temp_1 is full';

I have checked with ndb_desc; its output is:

[root@rhel4mysql2 mysql-cluster]# ndb_desc -c localhost temp_1 -d db_1 -p
-- temp_1 --
Version: 33554435
Fragment type: 5
K Value: 6
Min load factor: 78
Max load factor: 80
Temporary table: no
Number of attributes: 10
Number of primary keys: 1
Length of frm data: 419
Row Checksum: 1
Row GCI: 1
TableStatus: Retrieved
-- Attributes --
id Bigint NOT NULL AT=FIXED ST=DISK
name Varchar(32;latin1_swedish_ci) NOT NULL AT=FIXED ST=DISK
address Text(256,2000;16;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
location Varchar(255;latin1_swedish_ci) NOT NULL AT=FIXED ST=DISK
jtype Varchar(100;latin1_swedish_ci) NOT NULL AT=FIXED ST=DISK
married Char(1;latin1_swedish_ci) NOT NULL AT=FIXED ST=DISK
sex Char(1;latin1_swedish_ci) NOT NULL AT=FIXED ST=DISK
bdate Timestamp NOT NULL AT=FIXED ST=DISK
id_2 Bigint NOT NULL AT=FIXED ST=DISK
$PK Bigunsigned PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY

-- Indexes --
PRIMARY KEY($PK) - UniqueHashIndex

In this output we can see that the text field is stored in memory.
Why is that?

And the output of ALL DUMP 1000 is:

[root@rhel4mysql2 mysql-cluster]# tail -12 ndb_1_cluster.log

2006-07-28 02:22:19 [MgmSrvr] INFO -- Node 3: Data usage is 83%(13617 32K pages of total 16384)
2006-07-28 02:22:19 [MgmSrvr] INFO -- Node 3: Index usage is 9%(814 8K pages of total 8224)
2006-07-28 02:22:19 [MgmSrvr] INFO -- Node 4: Data usage is 82%(13531 32K pages of total 16384)
2006-07-28 02:22:19 [MgmSrvr] INFO -- Node 4: Index usage is 9%(814 8K pages of total 8224)
[root@rhel4mysql2 mysql-cluster]#

When I calculated the size of the records, it came to only about 85 MB.
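Converting the page counts from the log above into megabytes makes the gap with that 85 MB estimate concrete (plain arithmetic on the numbers reported for node 3):

```sql
-- Node 3 reports 13617 used 32K DataMemory pages out of 16384 configured.
SELECT 13617 * 32 / 1024 AS used_mb,    -- about 425 MB actually consumed
       16384 * 32 / 1024 AS total_mb;   -- 512 MB configured DataMemory
```

So the data nodes are consuming roughly 425 MB of the 512 MB DataMemory for those rows, about five times the raw data size, which is consistent with the AT=FIXED (fixed-width VARCHAR) and ST=MEMORY (in-memory TEXT part) attributes shown in the ndb_desc output above, plus per-row overhead.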


Now, what should I do to increase the size of the table? I want to add up to 4,000,000 records; is that possible? If yes, what should I do, and how and where? I am also not getting good performance at the moment; how can I improve the performance of this cluster environment?

Please give me an answer ASAP.

Thanks in advance...

Nilnandan Joshi
