Re: Sending data in processlist
Posted by: Rick James
Date: December 30, 2014 02:55PM

How big is each table? Sounds like each is about the same size.
My guess: Each table is 5-10MB? The entire dataset is 40-100GB?
Please provide SHOW TABLE STATUS for a representative table.
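Something like this, run against any one of the tables (the table name here is just a placeholder):
SHOW TABLE STATUS LIKE 'your_table_name'\G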

> The function uses parallel processing, each time sending 50 records to process.

When the 50 are running, do you see 50 INSERTs in SHOW PROCESSLIST all at once? Or do they come and go so fast that it does not matter? Max_used_connections was only 4 in the SHOW GLOBAL STATUS you showed me; I don't know if it is representative. If it is, then there is no problem there. That is, the INSERTs are finishing so fast that the 50 are not stumbling over each other.
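If you want to watch this yourself while the batch is running, these will show whether the 50 pile up:
SHOW FULL PROCESSLIST;
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW GLOBAL STATUS LIKE 'Threads_running';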

> Unable to set long_query_time = 2; it always shows 10.000000

Did you set it in my.cnf? (And restart MySQL.) Some newer versions may allow you to do
SET @@global.long_query_time = 2;
to set it until the next restart, at which point it reverts to whatever is in my.cnf (or default of 10).
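In my.cnf it would look something like this (the slow_query_log line is optional but usually wanted with it; assumes 5.1 or newer):
[mysqld]
slow_query_log = 1
long_query_time = 2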

> In my application log the error shows "thread terminated".

wait_timeout = 28800, Aborted_clients = 0, Aborted_connects = 0, so (I think) connections should not be terminated for timeout. Seeing a longer SHOW GLOBAL STATUS might come up with different clues.
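If it is not too much trouble, capture the full output to files and post them; something like this from the shell (add your usual login options):
mysql -e "SHOW GLOBAL STATUS" > status.txt
mysql -e "SHOW VARIABLES" > variables.txt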

Do the threads actually "disconnect"? Or do they stay connected?
* If it is coded to do 7000 connections per hour, the code should disconnect.
* If it is coded to have 50 threads running, each doing 140 INSERTs per hour, they could stay connected. Is that how it is coded?
* If it is coded to have one connection doing all the INSERTs, there should be no problem.

innodb_lock_wait_timeout is 100; the default is 50. Changing this variable may be an attempt to work around some deeper problem, such as the one you are describing. I see autocommit=ON; unless you are overriding that, transactions should be holding locks for no more than a fraction of a second.
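If you want to check whether lock waits are really happening, these are quick to run (look at the TRANSACTIONS section of the first one):
SHOW ENGINE INNODB STATUS\G
SELECT @@autocommit, @@innodb_lock_wait_timeout;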

Ah, another clue...
* 7000 tables
* One insert in each table per hour
* Two secondary indexes that are rather 'random': KEY `stateIdx` (`observedstate`), KEY `costIdx` (`powercost`)
* 20K rows/table
Deductions...
0.5MB/index (20K rows times roughly 25 bytes per secondary-index entry: key + PK + overhead) --> 1MB for those 2 indexes --> 7GB for those indexes across all 7000 tables
That's more than the buffer_pool, so eventually (after a few hours), it will do some mild thrashing to flush index blocks (16KB each) to disk to make room for other blocks.

If you have the RAM, I suggest innodb_buffer_pool_size = 8G (50% of RAM, instead of the current 12%) in order to cut back some on the thrashing.
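In my.cnf (this assumes you really have about 16GB of RAM; in 5.6 and older this setting is not dynamic, so a restart is needed):
[mysqld]
innodb_buffer_pool_size = 8G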

Another approach is to shrink the indexes. The relevant fields are pk_id (4 bytes), observed_state (4 bytes), and powercost (8 bytes). By shrinking the fields, the indexes would be smaller.
MEDIUMINT UNSIGNED (3 bytes) would let pk_id get as high as 16M, a number that you won't ever reach.
observed_state has only a small number of values? Look into TINYINT UNSIGNED and ENUM -- either is only 1 byte.
powercost could perhaps be FLOAT (4 bytes), assuming 6-7 significant digits suffice.
Those changes would shrink the two indexes to perhaps under 4GB.
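A sketch of the ALTER (column and table names are my guesses from the key definitions -- keep whatever NOT NULL / DEFAULT clauses you already have, and test on one table first, since the ALTER rebuilds the table and its indexes):
ALTER TABLE device_0001
    MODIFY pk_id MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT,
    MODIFY observedstate TINYINT UNSIGNED,
    MODIFY powercost FLOAT;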
