Slow Repair Table with Large Unique Keys
Hello,
I know there's a decent amount on this from searching the web, but I am stuck.
At my company we used to run MySQL 4.1, and we have a table with roughly 500 million rows -- far too many for the primary key (a bigint) to fit in memory. In 4.1, repair table <table_name> would take several hours, but it would finish fine.
In MySQL 5.0, however, the same repair table <table_name> takes days to finish instead of hours. It appears to be building the unique key index row by row while it copies the table, which is horrendously slow compared to building the index after the copy.
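For what it's worth, one way to confirm which repair path the server is taking while the repair runs is to watch the State column in SHOW PROCESSLIST (connection details below are assumptions):

```shell
# While the REPAIR TABLE is running, check from a second connection
# which repair method the server chose:
mysql -u root -p -e "SHOW PROCESSLIST\G"
# A State of "Repair by sorting" is the fast bulk index rebuild;
# "Repair with keycache" is the slow row-by-row path described above.
```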
I could not find any documentation about this change from 4.1 to 5.0, nor any good workarounds. If we have to go to 5.0, we basically have two options. One is to do repairs with myisamchk instead of repair table; myisamchk does not seem to do the slow row-by-row sort, but we risk accidentally issuing a repair table command instead of a myisamchk and killing the server. The other is to drop the unique indices on our table and do the unique key checking programmatically at runtime. That has the added advantage that alter table statements (which have the same issue) would also go quickly, in addition to repair table.
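In case it helps frame the first option, here is a sketch of an offline myisamchk repair. The paths and buffer sizes are assumptions to adjust for your data directory and hardware, and the table must not be open in the server (stop mysqld, or FLUSH TABLES and hold a lock first):

```shell
# Offline recover of the .MYI index file for one table.
# --recover rebuilds indexes using the fast sort-based method.
myisamchk --recover \
    --sort_buffer_size=256M \
    --key_buffer_size=512M \
    --tmpdir=/path/with/space \
    /var/lib/mysql/mydb/my_table.MYI
```

Giving myisamchk a large sort buffer and a tmpdir with plenty of free disk is what lets it sort the keys in bulk rather than inserting them one at a time.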
Neither of these sounds as nice as getting MySQL to simply do the quicker, non-row-by-row sort when it does a repair table. (Note that alter table is also incredibly slow, but there are ways to rebuild the table to work around that.) We could also assume we can always avoid a full repair table and just do repair table quick whenever we have issues, but that does not sound as reliable as being able to do a full table repair.
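One more knob that may be worth trying before giving up on full repairs: as I understand it, if the temporary sort file for an index would exceed myisam_max_sort_file_size, the server silently falls back to the slow "repair with keycache" path. Raising the limits before the repair (sizes, credentials, and the table name below are assumptions) may restore the fast sort-based rebuild:

```shell
# Raise the MyISAM sort limits, then repair, in one session.
# SET GLOBAL needs the SUPER privilege.
mysql -u root -p mydb -e "
  SET SESSION myisam_sort_buffer_size = 256*1024*1024;
  SET GLOBAL  myisam_max_sort_file_size = 200*1024*1024*1024;
  REPAIR TABLE my_table;"

# The lighter-weight variant mentioned above: QUICK repairs only
# the index file and leaves the data file alone.
mysql -u root -p mydb -e "REPAIR TABLE my_table QUICK;"
```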
Does anyone have any info, recommendations, or examples of how to effectively work around this issue with repairing large MySQL tables?
Thanks,
Trenton