Table fragmentation and optimization
Posted by:
Tom
Date: June 25, 2005 08:09AM
Hi,
I have an application with quite high read and write rates on a 13-million-row MyISAM table. Rows average only about 100 bytes. Reads peak at about 70 per second, writes at 4-6 per second. The table is growing by 70,000 rows a day and currently uses about 1 GB. Lots of rows are also pruned in random chunks.
The table uses the dynamic row format, but since I never do UPDATEs on it, I don't think row fragmentation is a huge problem. A static (fixed-length) format would waste a lot of space (up to 500 bytes per row). But, QUESTION 1: would static still be preferable, or does that slack space cost RAM and disk read/write time? The raw disk usage itself isn't a problem.
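[Editor's note: a minimal sketch of how one might test this, assuming a hypothetical table named `events` whose columns are all fixed-width types (FIXED is refused if any VARCHAR/TEXT/BLOB columns are present):]

```sql
-- Force fixed-length rows; MyISAM silently keeps Dynamic if any
-- variable-length columns (VARCHAR, TEXT, BLOB) remain.
ALTER TABLE events ROW_FORMAT = FIXED;

-- Verify which format the table actually ended up with:
-- the Row_format column reports Fixed or Dynamic, and Data_free
-- shows how many bytes are sitting in deleted-row holes.
SHOW TABLE STATUS LIKE 'events';
```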
I want to reduce disk I/O waits, so I thought splitting the table might be a solution -- I can put some parts on one drive and some on another. No RAID. My application neatly allows for the table to be split anyway.
But how does MyISAM handle _table_ fragmentation? If I have 4 parts, 2 on each drive, what happens when a row is written to part 1 on drive 1, then part 2 on drive 1, then part 1 again... it looks like monster fragmentation. So QUESTION 2: can MyISAM preallocate space? Is there some trick I can use? Or will I have to settle for 1 part per drive?
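[Editor's note: MyISAM has no preallocation option as such, but two related tricks exist: per-table DATA DIRECTORY / INDEX DIRECTORY options to place each part's files on a specific drive, and OPTIMIZE TABLE to defragment a part after heavy deletes. A hedged sketch, with hypothetical table names and paths:]

```sql
-- Put one part's data and index files on the second drive
-- (requires MyISAM and a filesystem/OS with symlink support;
-- paths here are placeholders, not recommendations):
CREATE TABLE events_part2 (
  id      INT NOT NULL,
  payload CHAR(100) NOT NULL,   -- fixed-width column keeps rows static
  PRIMARY KEY (id)
) ENGINE = MyISAM
  DATA DIRECTORY  = '/drive2/mysql-data'
  INDEX DIRECTORY = '/drive2/mysql-index';

-- Periodically defragment a part during a quiet window.
-- Note: this rebuilds the table and holds a write lock meanwhile.
OPTIMIZE TABLE events_part1;
```

Keeping each part's files on a single drive this way sidesteps the interleaved-write fragmentation described above, since the filesystem only sees one growing data file per drive per table.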
Delayed inserts aren't an option, because I need to be able to read the data immediately after it's INSERTed.
I've considered InnoDB, but I don't know if it'll actually provide better performance overall. I don't really have room to experiment. Any thoughts would be welcome.
Thanks!
Thomas