> By using group by datum, ID I guess it sorts by
> both parameters and since ID is unique to every
> recipe it doesn't group by the same dates I have
> used in many recipes...
Exactly, that was the idea. The order of recipes that share the same date may now change, but I think that doesn't matter (it was effectively random before anyway).
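To make the tie-breaking idea concrete, here's a minimal sketch. I'm using Python's sqlite3 just for illustration (the MySQL SQL is essentially the same); the column names `datum` and `id` are from your schema, the sample dates are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE recipes (id INTEGER PRIMARY KEY, datum TEXT)")
# Several recipes can share the same date; id is unique per recipe.
con.executemany("INSERT INTO recipes (id, datum) VALUES (?, ?)",
                [(1, "2006-03-01"), (2, "2006-03-02"),
                 (3, "2006-03-02"), (4, "2006-03-03")])
# Composite index: the ORDER BY can be read straight off the index.
con.execute("CREATE INDEX idx_datum_id ON recipes (datum, id)")
# The unique id breaks ties between recipes sharing a date,
# so the result order is always deterministic.
rows = con.execute(
    "SELECT id, datum FROM recipes ORDER BY datum DESC, id DESC").fetchall()
print(rows)  # newest first: ids 4, 3, 2, 1
```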
> Works great, but the explain says I don't use
> index at all, but explain must be wrong, as the
> speed is 20ms. It is working (without index, it
> takes 0.11 seconds) for 10 rows. I trust the
> result not the Explain I guess...
If you're talking about "Using index" in the "Extra" column of EXPLAIN: that's not what tells you whether an index is used. Look at the "key" column of the EXPLAIN output instead; it shows which index (if any) is used, and a NULL there is the only bad sign.
"Using index" only means that MySQL didn't have to touch the actual data file at all, because all the information (all columns in the SELECT list) was already available in the index. That's nice, of course, but not all that important. It's probably just bad wording on MySQL's part: almost everybody first thinks (I did so for my first half year with MySQL!) that indexes are only used when "Using index" shows up in "Extra", when it should really read something like "Using index for column data instead of data file" (which was probably just a little bit too long). Second important lesson for today! ;-)
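If you want to get into the habit of checking plans, here's a tiny sketch. Again sqlite3 stands in for MySQL: SQLite spells the command EXPLAIN QUERY PLAN and names the chosen index in its output, whereas in MySQL you'd run EXPLAIN SELECT ... and read the "key" column:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE recipes (id INTEGER PRIMARY KEY, datum TEXT)")
con.execute("CREATE INDEX idx_datum_id ON recipes (datum, id)")
# SQLite's plan output names the index it picked; with MySQL you'd
# instead check that EXPLAIN's "key" column is not NULL.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM recipes ORDER BY datum DESC, id DESC").fetchall()
print(plan)  # the detail text mentions idx_datum_id when it's used
```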
> 1. If I use INDEX instead of UNIQUE it doesn't
> change anything.
That's absolutely true. I just tend to use the most restrictive index type possible. As there will never be a duplicate combination of these two columns, I used UNIQUE. Maybe a plain INDEX would even be a little faster; I don't know how much overhead the UNIQUE check adds on INSERTs/UPDATEs. I should benchmark that some time, but it's probably almost nothing, and I'm just a little paranoid when it comes to data integrity: if something should be unique, I always tell the server to enforce it. ;-)
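This is the whole point of UNIQUE over INDEX: it's an index *plus* a constraint the server enforces for you. A small sketch (sqlite3 again for illustration; the column names `recipe_id` and `pos` are my guesses for your picture table, not from your schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pictures (recipe_id INTEGER, pos INTEGER)")
# A UNIQUE index works like a normal index for lookups,
# but additionally rejects duplicate (recipe_id, pos) pairs.
con.execute("CREATE UNIQUE INDEX uq_recipe_pos ON pictures (recipe_id, pos)")
con.execute("INSERT INTO pictures VALUES (1, 1)")
try:
    con.execute("INSERT INTO pictures VALUES (1, 1)")  # duplicate pair
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
print(duplicate_rejected)  # True: the server guards the integrity for us
```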
> 2. I don't use index on both columns on picture
> table only on ID and it still works OK.
Yep, in that simple query it goes unused. It's just there in case you extend the query later (a further JOIN, more WHERE restrictions), and I also thought MySQL might use it to optimize the MAX(): it could jump to the end of the matching range in the first index column and read the MAX() right off the index instead of scanning rows. I know it does this in some cases; I don't know whether it does here, and I don't think EXPLAIN really shows it. You can keep the index or drop it, as you like.
> My list is in such a manner that visitor can click
> on the different columns (author etc.) and it gets
> sorted on another column. I guess this "trick" of
> yours works only on the datum DESC, ID desc as you
> wrote. If visitor will order by another field it
> will take longer and there is nothing I can do
> about it I guess (can't have 10 indexes).
Hm, yep. You'd have to decide on the most important sort columns, index those, and either deactivate the others completely or just accept that the seldom-used columns are much slower for your users.
> Someone I know suggested me to make this list into
> a file and read the file when visitor comes to the
> page. I am not sure if reading file is really
> faster than database search. He tells me reading
> from database if the data don't change in real
> time (I put new recipes on only once daily) is a
> waste of resources.
It will probably be much faster, especially if you store not just the plain list but the whole pre-generated HTML. That way you save not only the load on the database but also on your PHP, Perl, ASP, Ruby, Python or whatever-else-you-use scripts. This is a good way to deal with really heavy load, especially if your site is only updated once a day.
Think of it this way: the webserver just has to retrieve a file and send it back! That happens in an instant, especially if the file is already cached in RAM by the webserver or the operating system. You can also cache just your most-used pages (maybe the front page) like this. Of course you'd have to handle updates somewhere in your scripts and refresh the cache accordingly.
That's actually how a lot of really big sites work: cache as much as possible as close to the front as possible (and then maybe on several later levels), and only let the truly dynamic stuff get through all the way to the database. And of course: still optimize all your queries and make use of features like the MySQL query cache (where it makes sense; it doesn't for all applications).
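Since your site updates only once a day, the cache logic can be very simple. A minimal sketch in Python (one of the script languages mentioned above); the file name, the 24-hour freshness window, and the HTML layout are all my invented examples, and sqlite3 again stands in for your MySQL database:

```python
import os
import sqlite3
import tempfile
import time

# Hypothetical cache location and freshness window (site updates daily).
CACHE_FILE = os.path.join(tempfile.gettempdir(), "recipes_cache.html")
MAX_AGE = 24 * 60 * 60

def render_from_db(con):
    # Run the expensive query once and pre-generate the whole HTML.
    rows = con.execute(
        "SELECT id, datum FROM recipes ORDER BY datum DESC, id DESC")
    items = "".join(f"<li>{rid}: {datum}</li>" for rid, datum in rows)
    return f"<ul>{items}</ul>"

def recipe_page(con):
    # Serve the cached file if it is fresh; regenerate it otherwise.
    try:
        if time.time() - os.path.getmtime(CACHE_FILE) < MAX_AGE:
            with open(CACHE_FILE) as f:
                return f.read()
    except OSError:
        pass  # no cache file yet
    html = render_from_db(con)
    with open(CACHE_FILE, "w") as f:
        f.write(html)
    return html

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE recipes (id INTEGER PRIMARY KEY, datum TEXT)")
con.execute("INSERT INTO recipes VALUES (1, '2006-03-01')")
try:
    os.remove(CACHE_FILE)  # start from a cold cache for the demo
except OSError:
    pass
page = recipe_page(con)   # first call: hits the database, writes the file
page2 = recipe_page(con)  # second call: served straight from the file
```

The webserver equivalent is even simpler: point it at the generated file directly and it never touches your scripts or the database at all.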
Cheers,
Beat
Beat Vontobel
http://www.futhark.ch/mysql