MySQL Forums
Forum List  »  Optimizer & Parser

Re: how to optimise query for 10+ millions records
Posted by: Rick James
Date: February 15, 2012 07:30AM

> I am unable to resolve the problem. I need it urgent because my site is live now and it has made it slow. I am very nervous where the problem is.
Sorry, but there are many problems...

Big red flag: splitting the date and time into separate DATE and TIME columns.
Instead, use a single DATETIME or TIMESTAMP column. It will simplify a lot of queries.
(DATE: 3 bytes; TIME: 2; DATETIME: 8; TIMESTAMP: 4)
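
Here is a sketch of the conversion, assuming the columns are named START_DATE and START_TIME as in the queries quoted below (adjust names to your schema):

```sql
-- Add a combined column and backfill it from the two old ones.
ALTER TABLE axecdr ADD COLUMN START_DATETIME DATETIME;
UPDATE axecdr SET START_DATETIME = TIMESTAMP(START_DATE, START_TIME);
-- After all queries have been switched over, drop the old columns:
-- ALTER TABLE axecdr DROP COLUMN START_DATE, DROP COLUMN START_TIME;
```

On a 10M-row table the UPDATE will take a while; consider doing it in chunks.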

> `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
You said 10 million; you did not say "more than 4 billion". Use INT UNSIGNED unless you might go past 4 billion. (Savings: 4 bytes in the table, plus 28 bytes in the indexes -- each InnoDB secondary index contains a copy of the PRIMARY KEY)
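
The change is one statement (assuming the table is axecdr, as in the queries below):

```sql
-- Note: this rebuilds the whole table (and its indexes), so run it
-- during a maintenance window.
ALTER TABLE axecdr MODIFY id INT UNSIGNED NOT NULL AUTO_INCREMENT;
```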

> varchar(100)
Shrink that where safe. Certain temporary-table and sorting operations will actually allocate the full 100 characters for every row.

Here's an example of where splitting DATE and TIME causes trouble:
> WHERE start_date BETWEEN '2012-01-27' AND '2012-01-28'
> AND Start_Time BETWEEN '00:00:00' AND '23:59:59'
It will see that the first field is a range, and it won't get to the second field, much less the fields after it.
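
Once date and time live in a single column (say START_DATETIME; name illustrative), the two ranges collapse into one, and an index starting with that column is fully usable:

```sql
-- Exclusive upper bound: covers every moment of the 28th without
-- relying on '23:59:59' (which misses fractional seconds).
WHERE START_DATETIME >= '2012-01-27'
  AND START_DATETIME <  '2012-01-29'
```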

When designing an index, put the range field _last_.

> ...2, ...3
No. You seem to have 9 fields repeated 3 times each. Move them into another table, together with a column holding the id that points back into this table.
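
A sketch of that child table; all names here are illustrative, not from your schema:

```sql
CREATE TABLE axecdr_detail (
  axecdr_id INT UNSIGNED NOT NULL,     -- points back to axecdr.id
  seq       TINYINT UNSIGNED NOT NULL, -- 1, 2, or 3
  -- ...the 9 formerly-repeated fields go here, once each...
  PRIMARY KEY (axecdr_id, seq)
) ENGINE=InnoDB;
```

Rows that previously carried empty "...2" / "...3" copies simply have fewer child rows.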

> KEY `indx_StartDateTime` (`START_DATE`,`START_TIME`),
> KEY `indx_DateTimeDestination` (`START_DATE`,`START_TIME`,`DestinationID`),
The first key is redundant since it is a prefix of the second. DROP the first.
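
Dropping it is a single statement (again assuming the table is axecdr):

```sql
ALTER TABLE axecdr DROP KEY indx_StartDateTime;
```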

> `CountryID` int(11) DEFAULT '0',
There are standard 2-letter country codes; suggest using CHAR(2) instead of a 4-byte int.

"IN (SELECT ...)" -- This optimizes poorly; turn it into a JOIN.

> My Comp Ram is 2 GB
Oh, that is tiny by today's standards. I hope you have innodb_buffer_pool_size set to about 600M, maybe less if your entire stack is on the one machine.

> AND destination.DestinationName LIKE '%%'
Since you are probably building the WHERE clause, simply do not add this onto it. (Maybe that is what you are doing with the comments?)
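
A sketch of conditional WHERE-building, in Python since the original application language is unknown; column names must come from your own whitelist, never from user input:

```python
def build_where(filters):
    """filters: dict mapping a whitelisted column name to the
    user-supplied search value (which may be empty)."""
    clauses, params = [], []
    for column, value in filters.items():
        if value:  # skip empty inputs instead of emitting LIKE '%%'
            clauses.append(f"{column} LIKE %s")
            params.append(f"%{value}%")
    where = " AND ".join(clauses)
    return ("WHERE " + where if where else ""), params
```

The values travel as query parameters, so the driver escapes them for you.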

> GROUP BY axecdr.START_DATE, axecdr.DestinationID
OK, this will get slightly messier once you change to a single field for date+time, but it is very much worth it! Here are two ways to achieve the GROUP BY:
GROUP BY LEFT(axecdr.START_DATETIME, 10), axecdr.DestinationID
GROUP BY DATE(axecdr.START_DATETIME), axecdr.DestinationID

> This comma separated string is then converted to temporary tables.
Ouch. Instead, construct an IN list. Be sure to also validate the input!! Hackers would love to get into your db and mess with things.
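
A sketch of the validate-then-IN-list approach, in Python since the original application language is unknown (the column name DestinationID is taken from the query above):

```python
def build_in_clause(csv_string):
    """Turn '3,7,42' into a parameterized IN clause plus its values.
    int() raises ValueError on anything that is not a number, which is
    the validation step -- reject the request instead of querying."""
    ids = [int(s) for s in csv_string.split(",")]
    placeholders = ", ".join(["%s"] * len(ids))
    return f"DestinationID IN ({placeholders})", ids
```

The ids are passed as parameters, never spliced into the SQL text, so injection is off the table.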

Fix most of those, then come back for more advice; I doubt if we are through with the issues.
