
Problem with Locks - Millions of Rows - Table Partitioning
Posted by: Vinicius Marques De Bernardi
Date: May 25, 2005 09:02AM

Hi everybody,

I have a big table that I already described here, and people suggested I try InnoDB, but in some INSERT tests InnoDB was much slower than what I need... so I decided to continue with my MyISAM tests.

Just as a reminder: I have a table that receives roughly 2000 records per vehicle per day, and some of the servers my product runs on handle 2000 to 5000 vehicles. The table holds GMT / Lat / Long / IO data about these vehicles.
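
For concreteness, a minimal sketch of what the table could look like; the real definition is not shown here, so every name and type below is an assumption:

CREATE TABLE positions (
    vehicle_id INT UNSIGNED NOT NULL,        -- which vehicle (hypothetical name)
    gmt        DATETIME     NOT NULL,        -- GMT timestamp of the position fix
    lat        DOUBLE       NOT NULL,        -- latitude
    lng        DOUBLE       NOT NULL,        -- longitude
    io_state   INT UNSIGNED,                 -- I/O information
    INDEX idx_vehicle_gmt (vehicle_id, gmt)  -- supports per-vehicle, per-day reads
) ENGINE=MyISAM;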

I'm using this configuration on my server:

[mysqld]
basedir=C:/mysql
max_connections=20
datadir=C:/mysql/data
skip-locking
key_buffer_size = 512M
table_cache = 256
max_allowed_packet = 16M
sort_buffer_size = 128M
net_buffer_length = 8K
read_buffer_size = 1M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
bulk_insert_buffer_size = 64M
query_cache_size = 16M
thread_cache = 8
# multiprocessor settings
thread_concurrency = 4

The server has 2 Xeon 2.8 GHz processors with HT, 1 GB of RAM, and 2 mirrored 36 GB disks.

I have 2 applications that insert into this table.
I normally don't delete; only at the end of the month do I delete the oldest month, so with this model I keep the last 2 months of data, which represents about 45,000,000 rows on a server with 500 vehicles.
So I only have inserts (2 connections, concurrently) and many selects; the selects are indexed and normally return 1 day of data for 1 vehicle, something like 2000 rows.
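
The selects look roughly like this, using the hypothetical names from the sketch above:

-- One day of data for one vehicle (~2000 rows), served by the
-- (vehicle_id, gmt) index.
SELECT gmt, lat, lng, io_state
FROM positions
WHERE vehicle_id = 1234
  AND gmt >= '2005-05-24 00:00:00'
  AND gmt <  '2005-05-25 00:00:00'
ORDER BY gmt;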

Now I want to know how to reduce the number of locks on the table, because using the MySQL monitor I detected that at times the number of locked connections gets too big. (I don't use the LOCK command; the table is locked automatically by the database.)

I read in the manual that MyISAM supports concurrent inserts, but that they don't work when there are holes in the table. So I have a doubt about my table: when I delete 1 month of data (at the start of the table), does that create a hole?
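
For reference, this can be checked and repaired from SQL; a sketch, again using the hypothetical positions name. The manual says deleted rows do leave reusable holes, and concurrent inserts stop working until the table is defragmented:

-- Concurrent inserts are enabled by default, but only take effect
-- while the data file has no holes.
SHOW VARIABLES LIKE 'concurrent_insert';

-- Defragment after the monthly DELETE so new rows go back to the end
-- of the data file. Note that OPTIMIZE TABLE locks the table while
-- it runs, so it should be scheduled right after the delete, off-peak.
OPTIMIZE TABLE positions;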

I appreciate any suggestions.

One more question:

I'm thinking of partitioning this table into 1 day of data per table, something like:

table0101 ( January 01 )
...
table1231 ( December 31 )

And pick the table in advance (by day) in my SQL statements, so I reduce the amount of data in each table, and probably reduce the locks too.

Is it a good idea? (Can growing the database to 366 tables cause me a problem? Testing with MySQL-Front gave me some delays when opening the database.)
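
If I go with daily tables, a MERGE table over them would still allow cross-day selects; a minimal sketch with the same hypothetical layout as above (all underlying tables must share one definition):

-- Daily tables, same definition as the positions sketch.
-- (CREATE TABLE ... LIKE needs MySQL 4.1+.)
CREATE TABLE table0101 LIKE positions;
CREATE TABLE table0102 LIKE positions;

-- MERGE table spanning the day tables; rows inserted through it
-- go to the last table in the UNION list.
CREATE TABLE positions_all (
    vehicle_id INT UNSIGNED NOT NULL,
    gmt        DATETIME     NOT NULL,
    lat        DOUBLE       NOT NULL,
    lng        DOUBLE       NOT NULL,
    io_state   INT UNSIGNED,
    INDEX idx_vehicle_gmt (vehicle_id, gmt)
) ENGINE=MERGE UNION=(table0101, table0102) INSERT_METHOD=LAST;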


Thanks

Vinicius Marques De Bernardi
MAXTRACK - Brasil
