Re: Using index makes query slower than table scan
Ah yes, good point Hubble. With Per's index, MySQL estimates the indexes identify 117,928 and 294,820 rows for the two queries, which is 20% and 50% of the 589,640 total rows. At that selectivity it probably would be faster to scan the whole table than to do 120,000+ index reads, each of which can cost a slow hard drive seek, versus a table scan which, if the data is relatively contiguous, will result in far fewer seeks.
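The arithmetic above is just row counts divided by table size. A tiny sketch (purely illustrative, not MySQL's actual cost model) using the numbers from this thread:

```python
# Fraction of the table each query's index lookup would touch.
# Numbers come from the EXPLAIN estimates discussed in this thread.
total_rows = 589_640
estimated_rows = {"query 1": 117_928, "query 2": 294_820}

for name, rows in estimated_rows.items():
    fraction = rows / total_rows
    # Rule of thumb: once a query matches a large fraction of the
    # table, random index-driven row reads cost more seeks than one
    # sequential table scan.
    print(f"{name}: {rows:,} rows = {fraction:.0%} of the table")
```

This is why the optimizer *should* prefer a table scan here; the problem is its statistics are apparently leading it to the index anyway.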
Therefore, the solution to Per's problem is: build a far more selective index that the query can use. I don't know what this data represents (other than "facts" of some kind), but `year` will obviously not be very selective (unless you have facts from the beginning of time; then, depending on your philosophical or theological preference, there could be a few thousand to a few billion distinct years). `type` doesn't sound very selective either. In short: your index is not selective enough, and it seems MySQL doesn't realize this. Maybe if you run ANALYZE TABLE, MySQL will update its statistics and realize it. If not, don't use the index, or re-design your query or table and then re-design the index.
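For example, something along these lines (table and column names are my guesses, since we haven't seen Per's schema):

```sql
-- Refresh the index statistics so the optimizer sees the real
-- selectivity of the existing index.
ANALYZE TABLE facts;

-- If that doesn't help, a composite index that includes a more
-- selective column can narrow the match to far fewer rows.
ALTER TABLE facts ADD INDEX idx_type_year_id (type, year, fact_id);
```

Run EXPLAIN on the query afterwards and check that the estimated row count has dropped well below that 20% mark; if it hasn't, the index still isn't selective enough for this query.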
Edited 1 time(s). Last edit at 10/26/2005 02:49PM by Daniel.