Re: Often corrupt table indexes
Posted by:
oliver
Date: July 20, 2005 07:40AM
Hi Ingo,
thanks for your fast reply. Regarding your questions:
- I'll try to find a way to send you some example data, since I'm not allowed to give you the original data
- The structure of auswertung_05 is slightly different, since it is just a summary of the other collected data. The structure is:
CREATE TABLE `auswertung_05` (
`LOG_TARIF` varchar(16) NOT NULL,
`LOG_VORGANG` varchar(10) NOT NULL,
`LOG_DATUM` date NOT NULL,
`LOG_TARIFINFO` varchar(10) NOT NULL,
`LOG_DRUCKAUSWAHL` varchar(12) NOT NULL,
`LOG_DRUCKSTUECK` varchar(255) NOT NULL,
`LOG_ANZAHL` bigint(11) unsigned,
PRIMARY KEY (`LOG_TARIF`,`LOG_VORGANG`,`LOG_DATUM`,`LOG_TARIFINFO`,`LOG_DRUCKAUSWAHL`,`LOG_DRUCKSTUECK`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
- Yes, we keep a table for each month
- Yes. We have been receiving data for almost every month since about January 2005, and we need up-to-date data in auswertung_05, so we check how many rows were added to a given table (say, log_05_03 for March). If more than 1,000 rows were added, all previously stored rows for March are first deleted from auswertung_05 and then a complete new set of aggregated data is inserted.
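To illustrate the refresh step described above, here is a minimal sketch using Python's sqlite3 as a stand-in for the MySQL/MyISAM setup. The table and column names are taken from the post, but the schemas are reduced to a few columns, the row-count check is simplified, and `refresh_march` is a hypothetical helper name; the real job would run against MySQL.

```python
import sqlite3

# In-memory SQLite stand-in for the MySQL tables described above.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE log_05_03 (
    LOG_TARIF TEXT, LOG_VORGANG TEXT, LOG_DATUM TEXT, LOG_ANZAHL INTEGER)""")
con.execute("""CREATE TABLE auswertung_05 (
    LOG_TARIF TEXT, LOG_VORGANG TEXT, LOG_DATUM TEXT, LOG_ANZAHL INTEGER)""")

rows = [("T1", "V1", "2005-03-01", 2),
        ("T1", "V1", "2005-03-02", 3),
        ("T2", "V1", "2005-03-01", 5)]
con.executemany("INSERT INTO log_05_03 VALUES (?, ?, ?, ?)", rows)

def refresh_march(con, threshold=1000):
    # Simplified check: the real job compares the current row count
    # against the count seen at the previous run.
    (added,) = con.execute("SELECT COUNT(*) FROM log_05_03").fetchone()
    if added <= threshold:
        return
    # Delete the old March summary rows, then insert a fresh aggregate.
    con.execute("DELETE FROM auswertung_05 WHERE LOG_DATUM LIKE '2005-03%'")
    con.execute("""INSERT INTO auswertung_05
        SELECT LOG_TARIF, LOG_VORGANG, LOG_DATUM, SUM(LOG_ANZAHL)
        FROM log_05_03 GROUP BY LOG_TARIF, LOG_VORGANG, LOG_DATUM""")

refresh_march(con, threshold=2)  # low threshold just for this demo
```

It is exactly this delete-then-bulk-insert pair that seems to trigger the index corruption in the real setup.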
We have already tried several ways to fix the problem, such as adding a FLUSH TABLES after each DELETE and/or INSERT, but ended up checking the return code of each statement, fixing the table with REPAIR TABLE xxx EXTENDED in case of an error, and then repeating the statement that led to the error condition. 8-( Usually this works for one and only one statement; the one following it will usually crash again.
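The check-and-repair workaround above can be sketched as a small retry wrapper. This is only an illustration of the control flow, not our actual code: `run_stmt` and `repair_table` are hypothetical placeholders for executing a statement and issuing REPAIR TABLE ... EXTENDED against the server.

```python
# Sketch of the check-return-code / repair / retry loop described above.
def run_with_repair(run_stmt, repair_table, stmt, table):
    try:
        return run_stmt(stmt)
    except Exception:
        repair_table(table)      # e.g. REPAIR TABLE <table> EXTENDED
        return run_stmt(stmt)    # retry the failing statement once

# Tiny demo: the first attempt fails, the "repair" clears the fault,
# and the retried statement succeeds.
state = {"broken": True}

def run_stmt(stmt):
    if state["broken"]:
        raise RuntimeError("index corrupted")
    return "ok: " + stmt

def repair_table(table):
    state["broken"] = False

result = run_with_repair(run_stmt, repair_table,
                         "DELETE FROM auswertung_05 WHERE ...", "auswertung_05")
```

In our case the wrapper succeeds for the repaired statement, but the very next statement tends to corrupt the index again, so the loop effectively repairs before every statement.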
Maybe we can continue this by email, so we can set up a way of transferring data and log files?
Regards, Oliver