Re: Memory usage increasing constantly
Posted by: Rick James
Date: June 11, 2014 12:56PM

> hitting on different partitions (each parallel query hits on a single partition).

But did you verify, via EXPLAIN PARTITIONS SELECT, that it was really hitting only one partition?
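For example (MyBigTable here is just a stand-in for your table name), something like:

EXPLAIN PARTITIONS
SELECT *
FROM MyBigTable
WHERE CId = 1199
  AND StartTime BETWEEN '2014-06-01 00:00:00' AND '2014-06-02 00:00:00';

The "partitions" column of the output should list exactly one partition if pruning is working the way you expect.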

> but some (around 60) of these queries may be executed in parallel threads

If SHOW PROCESSLIST shows 60 of them running simultaneously, then mysql is probably stumbling over itself. Where are the threads coming from? Apache? Tomcat? Other? When they pile up, each thread runs slower, thereby taking longer and aggravating the situation. Sometimes it is better to limit (at the application side) the number of possible threads, thereby allowing each thread to finish faster. The throughput may actually improve after such throttling.
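A quick way to see how many queries are actually executing right now (as opposed to merely connected):

SHOW GLOBAL STATUS LIKE 'Threads_running';
-- or count the non-idle connections:
SELECT COUNT(*) FROM information_schema.PROCESSLIST WHERE COMMAND != 'Sleep';

Sample that every few seconds during a spike and you will know whether "60 in parallel" is really happening.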

> When I give these queries, the RAM jumps up of 3 GB.
> using 2000 partitions
> table_open_cache: it is exactly 2000

If there are 60 threads, each opening all 2000 partitions, and if that really needs 60*2000 "tables" open, then the cache will be thrashing terribly. (There are a lot of IFs in my statement.)

> using 2000 partitions
> table_open_cache: it is exactly 2000

Suggest you set the cache a little higher than 2000, for the case where the "60*" is not relevant. There are other things besides the partitions that may need to be opened.
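table_open_cache is dynamic (in 5.5/5.6), so you can try a bigger value without a restart; 2500 here is just an example of "a little more than 2000":

SET GLOBAL table_open_cache = 2500;

Make the matching change in my.cnf so it survives the next restart.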

> Monitor memory usage versus Threads_connected

Try monitoring memory usage versus Threads_running.

Calculate these:
Opened_files/Uptime
Opened_tables/Uptime
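Both counters (and Uptime itself) come from SHOW GLOBAL STATUS:

SHOW GLOBAL STATUS LIKE 'Opened_%';
SHOW GLOBAL STATUS LIKE 'Uptime';

Roughly speaking, if Opened_tables/Uptime works out to more than a few per second, the table cache is churning.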

If you have 5.6.6 or later, table_open_cache_instances could be set to, say, 4 to speed things up a little.
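Note that table_open_cache_instances is not dynamic; it has to go in my.cnf (under [mysqld]) and takes effect at the next restart:

[mysqld]
table_open_cache_instances = 4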

I keep going back to throttling and speed because speeding things up will lessen the contention, hence the spikes in RAM usage. (Yeah, it does not solve the problem, rather it avoids it.)

> indexes: I agree that some indexes may speedup a little bit, but I'd like to avoid adding more indexes on such a big table.

I understand your reluctance. However, INDEX(CId, StartTime) would work much better for that naughty query. It can essentially do all of this inside the index, touching only one row in the index and one row in the data:
WHERE (CId=1199) AND (StartTime BETWEEN <t1> AND <t2>) ORDER BY StartTime DESC LIMIT 0,1
With the fields reversed (as you have it), it must scan lots of rows in the index. I think it (1) scans _all_ the rows in the index in the StartTime range, filtering on CId as it goes, (2) sorts the result, (3) delivers one row. It probably fetches data in (1), thereby needing to sort many rows in (2).
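Concretely, the index could be added like this (again, MyBigTable stands in for your real table; adding an index to a table that size will take a while, so test on a copy if you can):

ALTER TABLE MyBigTable
  ADD INDEX cid_starttime (CId, StartTime);

After that, rerun EXPLAIN on the naughty query; the "rows" estimate should drop dramatically.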

For Linux, thread_cache_size should be a small, but non-zero, value. "1" is likely to be too small; "50" is likely to be too large. (However, since "1" did not seem to affect things overall, this is not a big deal.)

> - is it expected that all this RAM is allocated? Maybe I'm wrong, but it looks like all my big table get cached in memory, exceeding innodb_buffer_pool.

I don't have all the answers, but here are some principles:
* The value of innodb_buffer_pool_size is a limit on memory allocated for it.
* The buffer_pool is a "cache", roughly LRU organized. So, table(s) bigger than it are cached at the "block" level. (An InnoDB block is 16KB.)
* The buffer_pool is _usually_ the main component of MySQL's memory usage.
* table_open_cache (and some similarly named caches) limit the number of entries for table info, etc. I don't know how much RAM is used by each. Say it is 1KB; then 2000 entries would be only 2MB -- nowhere near the 3GB you are experiencing.
* A "connection" takes a chunk of RAM. See Max_used_connections (high-water) and Threads_connected (current count). The space is for each, I think, thread_stack_size=262144. 60 connections --> 15MB; again << 3GB.

Ah... Let's look back at the naughty query. Note that there is a "sort". That involves either a temporary, implicit, MEMORY table (preferred) or a MyISAM table (if too big or TEXT/BLOB involved, etc). "Too big" occurs when the temp table becomes bigger than
min(max_heap_table_size, tmp_table_size)
which is 10M in your case. 60 * 10MB = 600MB, assuming all 60 had a MEMORY table open at the same time. 3GB / 10MB = 300; is there a chance that 300 threads were running? Maybe the query needs more than one tmp table.

I think the whole 10MB is allocated in RAM as soon as the tmp table is needed, but I am not sure.

See
http://dev.mysql.com/doc/refman/5.5/en/internal-temporary-tables.html

Bottom line: Try tmp_table_size = 1M. See whether (1) the spike drops to about 300MB, and (2) performance does not change significantly.
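Both variables are dynamic. Since the effective limit is min(max_heap_table_size, tmp_table_size), lowering tmp_table_size alone is enough, but keeping the pair consistent avoids surprises:

SET GLOBAL tmp_table_size      = 1048576;  -- 1M
SET GLOBAL max_heap_table_size = 1048576;

Only new connections pick up the new values; existing sessions keep their old copies.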
