MySQL Forums
Forum List  »  InnoDB

Re: Larger than expected tmp files querying InnoDB format database
Posted by: Rick James
Date: August 27, 2014 02:27PM

> subsequent runs return the result set instantly

It sounds like the "Query cache" is kicking in.
Try the select with
SELECT SQL_NO_CACHE ...
That will bypass the QC for performance testing.
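For example (a hypothetical query shape -- the table names come from your earlier posts, but the columns are illustrative):

```sql
-- SQL_NO_CACHE makes the server skip the Query cache for this one
-- statement, so repeated runs show the real execution time.
-- Column names here are assumptions, not from your schema.
SELECT SQL_NO_CACHE co.id, co.total_amount
    FROM contribution AS co
    JOIN contacts AS ct ON ct.id = co.contact_id;
```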

> Current max_heap_table_size = 16 M
> Current tmp_table_size = 16 M
> leaving 2GB of RAM snoozing

That limits in-RAM tmp tables to 16M (the smaller of the two settings is the effective limit). Caution: it is unwise to crank those numbers up to 2G -- if multiple connections each need 2G, you could quickly run out of RAM. Furthermore, because some of the fields (eg, `cancel_reason`) are TEXT, the optimizer will not even try to use RAM; TEXT/BLOB columns force the tmp table to disk.
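If you do decide to experiment, a safer approach is to raise the limits for a single session rather than globally; the values below are illustrative, not a recommendation:

```sql
-- The smaller of these two settings is the effective in-RAM limit,
-- so both must be raised together. SESSION scope avoids letting
-- every connection claim the larger amount.
SET SESSION tmp_table_size      = 64 * 1024 * 1024;  -- 64M
SET SESSION max_heap_table_size = 64 * 1024 * 1024;  -- 64M
```

Even then, a tmp table containing TEXT columns still goes to disk.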

To summarize the direction I have taken...
* The tmp table cannot (and/or should not) be moved to RAM, unless it can be made significantly smaller.
* Redesigning the SELECT (to shrink the 2GB) would speed up the query, even if it does not get rid of the tmp table. I suspect speed is the real issue, and the tmp table is merely a clue.
* The 2GB tmp file may be shrinkable by changing datatypes: in a tmp table, VARCHAR(255) fields are turned into CHAR(255), so declare VARCHARs no wider than needed (well under 255). Also avoid utf8 (3 bytes per character) when practical.
* Postponing JOINs (as mentioned) may decrease the 2GB.
* Increasing RAM and innodb_buffer_pool_size will improve speed, but won't eliminate the need for a tmp table; so we need to focus on shrinking the tmp table.
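As a sketch of the datatype approach (the column is mentioned above, but the target size and charset are assumptions, not taken from your schema):

```sql
-- If cancel_reason rarely exceeds, say, 100 characters, shrinking it
-- shrinks the CHAR(n) cells in the tmp table proportionally, and a
-- 1-byte charset (where the data allows) cuts utf8's 3x overhead.
-- Converting TEXT to a modest VARCHAR also removes the restriction
-- that forces the tmp table onto disk.
ALTER TABLE contribution
    MODIFY cancel_reason VARCHAR(100) CHARACTER SET latin1;
```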

> I would have thought MySQL should exhaust memory available in its pool of resources before swapping to disk, no?

It does not work that way. MySQL has a dozen caches, each with its own limit (buffer_pool, query_cache, table_open_cache, max_connections, etc.). These fixed limits mostly keep MySQL at a fixed amount of memory, which indirectly prevents swapping. There are a few wrinkles for which extra RAM needs to be left free. In particular, when multiple connections each perform complex SELECTs, each needing one or more tmp tables, and those tmp tables are put in RAM, you could run out of RAM. The controls (max_connections and tmp_table_size) are not sufficient to cap total RAM consumed, because a single SELECT can require multiple tmp tables.
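You can inspect the individual limits involved with (values will vary per server):

```sql
SHOW VARIABLES WHERE Variable_name IN
    ('innodb_buffer_pool_size', 'query_cache_size',
     'table_open_cache', 'max_connections',
     'tmp_table_size', 'max_heap_table_size');
```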

> execute queries utilizing fast RAM, completing them faster, so that the RAM is available sooner for next queries.

I'm afraid there is no config setting for that; instead, there is the arduous task of improving indexes, datatypes, JOINs, normalization (or denormalization), using (or un-using) subqueries, etc. I have pointed out many of these tasks.

> The contribution table has just over 94,000 records in it. The contacts table has just under 18,000 records in it.

I presented 4 queries (with subtle differences) -- I was hoping for 4 numbers, with some of them being the same. The differences/samenesses may be important clues on where to go next.

> seeking some silver bullet of optimization possibly via subquery

Many questions on this forum have a simple and obvious silver bullet. Alas, your case is more complex, hence is taking longer to resolve. This is an example of why I generally dislike 3rd party software -- they should have done their homework on optimizing this query, not left it to all their customers.

