Re: Queries returning large number of rows
Anthony Willard wrote:
> Is your "like 'whatever%'" clause large enough?
> Using too few characters may make the query engine
> decide to ignore the index.
>
> For instance, 'abc%' is more generic than
> 'abcdefgh%'. If your data has many rows beginning
> with 'abc', the optimizer may not choose the index
> over a table scan.
Well, I wondered about that. EXPLAIN SELECT... does indicate, though, that the key will be used. I'm assuming that's reliable, but I don't know for certain.
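For what it's worth, this is roughly what I've been checking ('entries' is just a placeholder table name here; 'current_entry' is the real column):

    EXPLAIN
    SELECT current_entry
    FROM entries
    WHERE current_entry LIKE 'abc%';

When the 'key' column of the EXPLAIN output names the index and 'rows' looks reasonable, I've been taking that to mean the index will be used, though as you say the optimizer might decide differently with a shorter, more generic prefix.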
>
> Any index you create (even one with maximum
> cardinality) won't help much if you ask it to
> process millions of records. If all of your data
> is available in your index, the scan takes place
> against the index instead of the table itself, but
> it's still a scan.
With MyISAM tables I'm definitely seeing a huge performance increase when the table is small enough for the indexes to fit into the memory specified in 'key_buffer_size', which suggests MySQL is able to satisfy the query from the index without having to go to the rows themselves.
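In case it's useful, this is how I've been checking the buffer size and whether a query is answered from the index alone (again, 'entries' is only a stand-in table name):

    SHOW VARIABLES LIKE 'key_buffer_size';

    EXPLAIN
    SELECT current_entry
    FROM entries
    WHERE current_entry LIKE 'abcdefgh%';

When the Extra column of the EXPLAIN output shows 'Using index', I take it the table rows aren't being read at all, which would fit the speedup I see once the index fits in the key buffer.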
>
> With that in mind, you might try reducing the
> prefixes in your index to increase the key
> density. (Do you really need 62 bytes if fewer
> would do?)
Actually, it's a composite index: only the first 31 bytes are used for selects against the 'current_entry' column alone, and all of those are necessary. The other half is for selects against both columns.
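For context, the index is defined more or less like this (the table name and the second column name are placeholders; the 31-byte prefixes are the real lengths):

    CREATE INDEX idx_current_entry
    ON entries (current_entry(31), other_column(31));

So a query filtering on 'current_entry' alone only needs the first 31-byte prefix, while a query filtering on both columns can use the full 62 bytes.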
>
> I hope this helps.
It does, thanks!