Re: MySQL 5.1.49 - Optimal settings for memory usage
Rick James Wrote:
-------------------------------------------------------
> > SELECT Win.time,
>        UNIX_TIMESTAMP(Win.time) AS unixtime,
>        YEAR(Win.time) AS jahr, MONTH(Win.time) AS monat,
>        DAY(Win.time) AS tag, HOUR(Win.time) DIV 12 AS intervall,
>        ROUND(AVG(SIN(Win.windricht/180*PI())), 4) AS avg_sin_windricht,
>        ROUND(AVG(COS(Win.windricht/180*PI())), 4) AS avg_cos_windricht,
>        ROUND(AVG(Win.windgeschw), 4) AS avg_windgeschw,
>        ROUND(AVG(-ABS(Win.windgeschw)*SIN(Win.windricht/180*PI())), 4) AS avg_uwind,
>        ROUND(AVG(-ABS(Win.windgeschw)*COS(Win.windricht/180*PI())), 4) AS avg_vwind
> FROM MIUB_messdaten.Wind AS Win
> WHERE Win.time>='2011-01-17 07:54:00'
>   AND Win.time<'2012-01-17 07:54:00'
>   AND Win.Geraete_idGeraete=2
> GROUP BY jahr,monat,tag,intervall
> ORDER BY Win.time;
>
> Needs an index on (Geraete_idGeraete, time) in
> that order. Since it did not use
> INDEX(Geraete_idGeraete), I have to assume that
> many rows have Geraete_idGeraete=2. Is that the
> case?
Yes Rick, every row has that value - it is the ID of the wind measuring device, and a foreign key to the measuring-device table.
Each entry looks like this:
SELECT * FROM `Wind` WHERE `time` >= '2012-01-06 09:00:00' LIMIT 0 , 1;
+---------------------+------------+-------------------+-----------+
| time                | windgeschw | Geraete_idGeraete | windricht |
+---------------------+------------+-------------------+-----------+
| 2012-01-06 09:00:00 |      3.682 |                 2 |     335.6 |
+---------------------+------------+-------------------+-----------+
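For reference, the composite index Rick suggests could be created like this (a sketch only; the index name is arbitrary, and building it on a large table takes a while):

ALTER TABLE MIUB_messdaten.Wind
  ADD INDEX idx_geraet_time (Geraete_idGeraete, `time`);

With the equality column first and the range column second, the WHERE clause in the query above can use both parts of the index.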
>
> Because this is InnoDB, INDEX(Geraete_idGeraete)
> has `time` implicitly tacked on the end (so that
> it can find the PRIMARY KEY).
>
> Swapping the keys to
> PRIMARY KEY(Geraete_idGeraete, time),
> INDEX(time)
> would take some time, and may or may not speed up
> that query.
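A sketch of that swap, assuming the current PRIMARY KEY is on `time` alone (rebuilding the clustered index rewrites the whole InnoDB table, so test on a copy first):

ALTER TABLE MIUB_messdaten.Wind
  DROP PRIMARY KEY,
  ADD PRIMARY KEY (Geraete_idGeraete, `time`),
  ADD INDEX idx_time (`time`);   -- not tested against the live schema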
>
> It would help to have the ORDER BY be the same as
> the GROUP BY.
>
> If you don't really need jahr, monat, etc, then
> you could work with
> FLOOR(UNIX_TIMESTAMP(`time`) / (12*60*60))
> except there might be a timezone issue.
>
We do need those columns, because we create a web plot of the averaged data with a human-readable timeline at the bottom.
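For completeness, the 12-hour bucket Rick mentions can still be mapped back to a readable label; a hedged sketch (FROM_UNIXTIME uses the session time zone, which is the timezone issue he warns about):

SELECT FLOOR(UNIX_TIMESTAMP(`time`) / 43200) AS intervall12h,   -- 43200 s = 12 h
       FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(`time`) / 43200) * 43200) AS label,
       ROUND(AVG(windgeschw), 4) AS avg_windgeschw
FROM MIUB_messdaten.Wind
WHERE Geraete_idGeraete = 2
GROUP BY intervall12h;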
> The other approach is to create a "Summary table"
> that pre-calculates these values. Then use that
> table for this query. The new table would be
> updated every 12 hours with a new row. The
> resulting query would run probably more than 100x
> faster.
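A summary table along those lines might look like this (a sketch only; the table and column names are invented here, and the INSERT would be scheduled, e.g. from cron, once per finished 12-hour interval):

CREATE TABLE Wind_summary (
  Geraete_idGeraete INT NOT NULL,
  jahr SMALLINT NOT NULL,
  monat TINYINT NOT NULL,
  tag TINYINT NOT NULL,
  intervall TINYINT NOT NULL,          -- 0 = 00:00-11:59, 1 = 12:00-23:59
  avg_sin_windricht DECIMAL(6,4),
  avg_cos_windricht DECIMAL(6,4),
  avg_windgeschw DECIMAL(10,4),
  avg_uwind DECIMAL(10,4),
  avg_vwind DECIMAL(10,4),
  PRIMARY KEY (Geraete_idGeraete, jahr, monat, tag, intervall)
) ENGINE=InnoDB;

INSERT INTO Wind_summary
SELECT Geraete_idGeraete,
       YEAR(`time`), MONTH(`time`), DAY(`time`), HOUR(`time`) DIV 12,
       ROUND(AVG(SIN(windricht/180*PI())), 4),
       ROUND(AVG(COS(windricht/180*PI())), 4),
       ROUND(AVG(windgeschw), 4),
       ROUND(AVG(-ABS(windgeschw)*SIN(windricht/180*PI())), 4),
       ROUND(AVG(-ABS(windgeschw)*COS(windricht/180*PI())), 4)
FROM MIUB_messdaten.Wind
WHERE `time` >= '2012-01-17 00:00:00'   -- boundaries of the just-finished
  AND `time` <  '2012-01-17 12:00:00'   -- 12-hour interval, filled in by the job
GROUP BY Geraete_idGeraete, YEAR(`time`), MONTH(`time`), DAY(`time`), HOUR(`time`) DIV 12;

The big SELECT from the first post would then read from Wind_summary instead of scanning a year of raw rows.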
Thanks for the good explanation; I will think about the best way to optimize the indexing.
While digging deeper into this, I found another strange thing, which follows in the next message.
Regards Marc