ndb table data length
I've noticed some odd behavior with NDB table data length: it doesn't seem to free memory when large numbers of rows are deleted (MySQL 5.0.18-max-x86_64):
create database memory_test;
use memory_test;
create table memory_hog (hog_id INT AUTO_INCREMENT PRIMARY KEY, hog_data VARCHAR(255)) ENGINE=NDB;
show table status like 'memory_hog';
(Shows 0 rows, 0 avg. length, and 0 data length)
I have a PHP script that inserts 65536 rows into this table:
65536 times "insert into memory_hog (hog_data) values('<random data>');"
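For reference, the insert loop amounts to something like this (a minimal sketch using procedural mysqli; the connection details and the random-data generator are placeholders, not the original script):

<?php
// Minimal sketch of the insert script; connection parameters are placeholders.
$db = mysqli_connect('localhost', 'user', 'password', 'memory_test');

for ($i = 0; $i < 65536; $i++) {
    // Build 255 bytes of random hex data for the VARCHAR(255) column
    // (hex only, so no quoting/escaping issues in the INSERT).
    $data = substr(str_repeat(md5(mt_rand()), 8), 0, 255);
    mysqli_query($db, "insert into memory_hog (hog_data) values('$data')");
}

mysqli_close($db);
?>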
show table status like 'memory_hog';
(Shows 65536 rows, 276 avg length, and 19791872 data length)
276 * 65536 = 18087936 vs. 19791872 -> That's close enough (a little overhead is fine).
delete from memory_hog limit 32768;
show table status like 'memory_hog';
(Shows 32768 rows, 276 avg length, and 19791872 data length)
276 * 32768 = 9043968 vs. 19791872?
delete from memory_hog limit 16384;
show table status like 'memory_hog';
(Shows 16384 rows, 276 avg length, and 19791872 data length)
Same thing when deleting half of the remaining rows each time (8192, 4096, 2048, 1024): the data length stays at 19791872. It finally dropped once I got down to 512 rows.
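If you want to reproduce the whole halving sequence in one go, it amounts to a loop like this (again a sketch with the same placeholder connection as above, not the original script):

<?php
// Repeatedly delete half of the remaining rows and report the table status.
$db = mysqli_connect('localhost', 'user', 'password', 'memory_test');

for ($rows = 32768; $rows >= 512; $rows /= 2) {
    mysqli_query($db, "delete from memory_hog limit $rows");
    $result = mysqli_query($db, "show table status like 'memory_hog'");
    $status = mysqli_fetch_assoc($result);
    printf("rows: %s, avg length: %s, data length: %s\n",
        $status['Rows'], $status['Avg_row_length'], $status['Data_length']);
}

mysqli_close($db);
?>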
This in itself isn't a problem, but we're seeing the same behavior with tables that hold almost 500 MB of data: when we clear out 80% of the rows, the table appears to keep using all 500 MB.
Anyone know of a way to flush that empty memory out of the table?