MySQL Forums :: Performance :: Get statistics using self joins

Get statistics using self joins
Posted by: Harutyun Drnoyan ()
Date: May 16, 2009 10:40AM

Hello everyone.

I'm working on banner software that lets admins see full statistics of banner displays, clicks and other information for a selected time interval, broken down by minutes, hours, days, etc.
I'd like to consult anyone with experience running JOIN queries against large tables and optimize my code as much as possible. Everyone is also welcome to suggest different approaches to this task.

I decided to store all the data in MySQL using the MyISAM table engine.

Table created with query

CREATE TABLE `sample_stats` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `bannerid` int(11) NOT NULL,
  `url` text NOT NULL,
  `dateline` int(11) NOT NULL,
  `ip` char(255) NOT NULL,
  `user_agent` char(255) NOT NULL,
  `platform` char(255) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `bannerid` (`bannerid`),
  KEY `dateline` (`dateline`)
) ENGINE=MyISAM;
I decided to add indexes on the bannerid and dateline columns, as they are heavily used by all the queries that fetch statistics data.

I found that the best way to get data from the table for different time periods is a self-join strategy, so I generate the SQL in PHP with as many joins as necessary to get all the required data.

For example, the query to get data from 13 May 2009 to 16 May 2009 looks like

SELECT COUNT(DISTINCT ...), COUNT(DISTINCT ...), COUNT(DISTINCT ...)
FROM sample_stats AS stat
LEFT JOIN sample_stats AS stat0 ON (...)
LEFT JOIN sample_stats AS stat1 ON (...)
LEFT JOIN sample_stats AS stat2 ON (...)
WHERE stat.id = (SELECT MIN(id) FROM sample_stats)

On a table with ~26,000,000 rows, with ~10,000 rows returned for each JOIN, this query takes 50-60 minutes to complete.

EXPLAIN shows that all the joins are using keys, with a range scan on the dateline field.

Profiling shows that finding the rows and all the other operations are quite quick, and that most of the time is spent in the "Sending data" stage. So I tried removing all the selected data (the COUNT(DISTINCT ...) expressions) from the query, and in that case it takes milliseconds even with tens of joins (the maximum is 61, as far as I have seen).

The sub-query

WHERE stat.id = (SELECT MIN(id) FROM sample_stats)

was added to limit the number of scanned rows in the stats table (the main table in the FROM clause) to 1; otherwise it would scan all 26,000,000 rows in that table and join them to all the other rows. That greatly improved the performance of this query. As you can see, I don't actually need any data from the stats table; it is only used as a fake table to join the other tables to and get rows for the date intervals.
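To make the shape of this technique concrete, here is a minimal, runnable sketch. It uses Python's sqlite3 as a stand-in for MySQL, toy timestamps (each "day" is 100 seconds long), and made-up sample rows; only the table and alias names come from the post above:

```python
import sqlite3

# In-memory stand-in for the MySQL table described above.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sample_stats (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    bannerid INTEGER, dateline INTEGER, ip TEXT)""")

# Toy data: day 0 spans dateline 0-99, day 1 spans 100-199, and so on.
rows = [(1, 10, "a"), (1, 20, "a"), (1, 50, "b"),    # day 0: 2 distinct IPs
        (1, 110, "a"), (1, 150, "c"),                # day 1: 2 distinct IPs
        (1, 250, "d")]                               # day 2: 1 distinct IP
conn.executemany(
    "INSERT INTO sample_stats (bannerid, dateline, ip) VALUES (?,?,?)", rows)

# One self-join per interval; the WHERE clause pins the "fake" base table
# to a single row so it is never scanned in full.
sql = """
SELECT COUNT(DISTINCT stat0.ip) AS day0_ips,
       COUNT(DISTINCT stat1.ip) AS day1_ips,
       COUNT(DISTINCT stat2.ip) AS day2_ips
FROM sample_stats AS stat
LEFT JOIN sample_stats AS stat0 ON stat0.dateline BETWEEN 0   AND 99
LEFT JOIN sample_stats AS stat1 ON stat1.dateline BETWEEN 100 AND 199
LEFT JOIN sample_stats AS stat2 ON stat2.dateline BETWEEN 200 AND 299
WHERE stat.id = (SELECT MIN(id) FROM sample_stats)
"""
print(conn.execute(sql).fetchone())  # (2, 2, 1)
```

Note that COUNT(DISTINCT ...) is what keeps the result correct here: the three joins produce a cross product of matching rows, but the distinct count per alias is unaffected by it.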

So this is what I am doing now, and it works quite well for tables that are not too huge and a small number of joins (intervals). Here are the questions that are still not clear to me:

1. How can the "Sending data" phase of the query be improved? It seems that COUNT(DISTINCT ...) is too slow for counting unique matched rows. I have read about a filtered insert (using a temporary table, inserting data through a unique key, and getting the row count of that table), but I don't know enough about it to test whether it's faster than the COUNT function on an indexed column. Any ideas?
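The "filtered insert" idea mentioned in question 1 can be sketched as follows. This is a toy illustration with sqlite3 and made-up data; in MySQL the equivalent would be a CREATE TEMPORARY TABLE with a UNIQUE KEY and INSERT IGNORE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sample_stats (id INTEGER PRIMARY KEY, dateline INTEGER, ip TEXT)")
conn.executemany("INSERT INTO sample_stats (dateline, ip) VALUES (?,?)",
                 [(10, "a"), (20, "a"), (50, "b"), (60, "b"), (70, "c")])

# Temporary table keyed on ip: duplicate IPs are silently dropped on insert,
# so COUNT(*) on it equals COUNT(DISTINCT ip) over the source rows.
conn.execute("CREATE TEMP TABLE uniq_ips (ip TEXT PRIMARY KEY)")
conn.execute("""INSERT OR IGNORE INTO uniq_ips
                SELECT ip FROM sample_stats WHERE dateline BETWEEN 0 AND 99""")
distinct_ips = conn.execute("SELECT COUNT(*) FROM uniq_ips").fetchone()[0]
print(distinct_ips)  # 3
```

Whether this beats COUNT(DISTINCT ...) on a real 26-million-row table would have to be measured; the sketch only shows that the two approaches agree.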

2. How else can I force the engine not to scan the main table, without using a subquery? Or is there a way to use a fake (unreal) table instead of a table with millions of rows?
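One possible answer to question 2 is to anchor the joins on a one-row derived table instead of sample_stats itself, so the MIN(id) subquery is no longer needed. A sketch under the same toy-data assumptions as before (sqlite3 standing in for MySQL, which also accepts a derived table like this in the FROM clause):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sample_stats (id INTEGER PRIMARY KEY, dateline INTEGER, ip TEXT)")
conn.executemany("INSERT INTO sample_stats (dateline, ip) VALUES (?,?)",
                 [(10, "a"), (150, "b"), (160, "c")])

# A one-row derived table replaces the "fake" base table, so the big table
# only appears in the interval joins, never as the main FROM table.
sql = """
SELECT COUNT(DISTINCT stat0.ip), COUNT(DISTINCT stat1.ip)
FROM (SELECT 1) AS dummy
LEFT JOIN sample_stats AS stat0 ON stat0.dateline BETWEEN 0   AND 99
LEFT JOIN sample_stats AS stat1 ON stat1.dateline BETWEEN 100 AND 199
"""
print(conn.execute(sql).fetchone())  # (1, 2)
```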

3. Finally, is it really a good idea to use multiple joins instead of separate simple queries for each time interval? So far it seems that with a small number of rows (tens of thousands) joining is much quicker than separate queries, but as the number of rows grows, joining gets slower and slower (although, as I said, finding the rows for the joins is still quite quick; it's returning the data that is too slow). So maybe I should check the row count before deciding how many joins to use?
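For comparison, the per-interval alternative from question 3 is just one small range query per interval. A sketch, again with sqlite3 and toy 100-second "days"; the helper name ips_per_interval is made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sample_stats (id INTEGER PRIMARY KEY, dateline INTEGER, ip TEXT)")
conn.executemany("INSERT INTO sample_stats (dateline, ip) VALUES (?,?)",
                 [(10, "a"), (20, "a"), (150, "b"), (250, "c"), (260, "c")])

DAY = 100  # interval length in this toy data set

def ips_per_interval(conn, start, n_intervals):
    """One simple indexed range query per interval instead of one big join."""
    counts = []
    for i in range(n_intervals):
        lo, hi = start + i * DAY, start + (i + 1) * DAY - 1
        row = conn.execute(
            "SELECT COUNT(DISTINCT ip) FROM sample_stats "
            "WHERE dateline BETWEEN ? AND ?", (lo, hi)).fetchone()
        counts.append(row[0])
    return counts

print(ips_per_interval(conn, 0, 3))  # [1, 1, 1]
```

Each query here is a plain range scan on the dateline index, so its cost grows with the size of one interval rather than with the cross product of all of them.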

4. I tried to benchmark my queries using timers in my PHP script, but it seems that MySQL heavily caches queries. The same query run a second time takes only ~0.2 seconds. I tried to disable the cache using


and changing my data (inserting and deleting rows), but the query is still too fast. As mentioned in the MySQL documentation, the query cache is used only for exactly the same query, and only if the table has not been updated. But even changing the query to something like


still did not take more than ~0.3 seconds to complete. So can anyone suggest an optimal way to benchmark queries?
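One common approach to question 4 is to time several runs and look at the spread rather than a single number (in MySQL 5.x you can additionally put SQL_NO_CACHE right after SELECT to bypass the query cache for one statement). A minimal timing harness, with sqlite3 as a stand-in backend and a made-up bench helper:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sample_stats (id INTEGER PRIMARY KEY, dateline INTEGER, ip TEXT)")
conn.executemany("INSERT INTO sample_stats (dateline, ip) VALUES (?,?)",
                 [(i, str(i % 50)) for i in range(10_000)])

def bench(conn, sql, runs=5):
    """Run the query several times and return the per-run wall-clock timings."""
    timings = []
    for _ in range(runs):
        t0 = time.perf_counter()
        conn.execute(sql).fetchall()  # force the full result set to be read
        timings.append(time.perf_counter() - t0)
    return timings

timings = bench(conn, "SELECT COUNT(DISTINCT ip) FROM sample_stats "
                      "WHERE dateline BETWEEN 0 AND 4999")
print(min(timings), max(timings))
```

The first run is typically the slowest (cold caches); the minimum over several runs is a reasonable estimate of the warm-cache cost, and the first-run figure approximates the cold-cache cost.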

5. I add the keys in the table creation query and insert random data into the table with PHP. phpMyAdmin shows that a lot of the table data is in the indexes (~25%), and from this I assume the index data is updated on every INSERT query, so there is no need to run an additional ANALYZE TABLE to populate it. Is that right, or does something happen to the index data during the first query that is the main reason later queries run much faster than the first one?

6. I have searched a lot for information connected with
but couldn't work out whether their usage is a performance gain or a bottleneck.

That's all so far. Any suggestions are highly appreciated, and any questions connected with this topic are welcome.


Subject Views Written By Posted
Get statistics using self joins 4770 Harutyun Drnoyan 05/16/2009 10:40AM
Re: Get statistics using self joins 1704 Rick James 05/16/2009 07:42PM
Re: Get statistics using self joins 1904 Harutyun Drnoyan 05/17/2009 11:41AM
Re: Get statistics using self joins 1749 Rick James 05/17/2009 12:32PM
Re: Get statistics using self joins 1765 Harutyun Drnoyan 05/17/2009 12:48PM
Re: Get statistics using self joins 1796 Harutyun Drnoyan 05/18/2009 03:24AM

Content reproduced on this site is the property of the respective copyright holders. It is not reviewed in advance by Oracle and does not necessarily represent the opinion of Oracle or any other party.