Re: large InnoDB table partitioning without explicit PK
Posted by: Rick James
Date: November 19, 2014 12:12AM
(I guess I missed this message.)
> Regarding Select_scan / Com_select - (relatively high) number of full table scans, there was a select statement...
I love working with you -- I point you in a direction; you carry the ball from there.
> My idea is to change it and perform aggregations on MySQL directly (either on a heap table / temp or on a slave or both).
Sure. The MEMORY cache table was designed specifically to make that work well. (BTW, my implementation did not have multiple threads, and it could batch multiple rows into a single INSERT.)
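A minimal sketch of that cache idea, assuming a hypothetical MEMORY staging table cache_raw and a main table xx with columns (a, b, c) -- all names here are made up, not yours:

CREATE TABLE cache_raw (
    a INT NOT NULL,
    b INT NOT NULL,
    c INT NOT NULL
) ENGINE=MEMORY;

-- Each flush batches many incoming rows into one multi-row INSERT:
INSERT INTO cache_raw (a, b, c) VALUES
    (1, 2, 10),
    (1, 3, 20),
    (2, 2, 30);

-- Periodically copy the batch into the main InnoDB table and empty the cache:
INSERT INTO xx (a, b, c) SELECT a, b, c FROM cache_raw;
DELETE FROM cache_raw;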
An issue with aggregation -- when doing the scan to collect the rows to summarize, it is best to have those rows consecutive. That is, they should be one of
* an entire table (such as the MEMORY cache table) or
* an entire partition or
* a range of rows based on the PRIMARY KEY (not a secondary key!)
This is because it is very efficient to do a table scan or a range scan; it is much less efficient to use a secondary key. In case you are not familiar with InnoDB's indexing scheme, I'll briefly explain it.
* A B+Tree is efficient for scanning consecutive rows.
* The data and the PRIMARY KEY live together in one BTree (actually a B+Tree);
* Each secondary INDEX lives in a separate B+Tree -- it contains the secondary column(s), plus the PK's column(s);
* A secondary key lookup therefore requires 2 B+Tree lookups; thus, scanning one of your secondary keys would be quite inefficient. (A sketch of the two access patterns follows.)
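To make that concrete, here is a hypothetical layout (table and column names are invented for illustration) and the two ways a summarizing scan could be driven:

CREATE TABLE xx (
    id  BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    a   INT NOT NULL,
    b   INT NOT NULL,
    c   INT NOT NULL,
    PRIMARY KEY (id),        -- the data lives in this B+Tree, ordered by id
    KEY idx_a (a)            -- a separate B+Tree holding only (a, id) pairs
) ENGINE=InnoDB;

-- Range scan on the PK: one B+Tree, rows read consecutively -- cheap.
SELECT a, b, SUM(c) FROM xx WHERE id BETWEEN 1000001 AND 2000000 GROUP BY a, b;

-- Driven by the secondary key: for every matching (a, id) entry InnoDB must
-- do a second lookup into the PK tree to fetch b and c -- much more expensive.
SELECT a, b, SUM(c) FROM xx WHERE a = 7 GROUP BY a, b;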
By using an AUTO_INCREMENT PK (and no cache table), you have a way to summarize less than the full table (or partition) efficiently. You just remember where you "left off".
The aggregation would probably be something like
INSERT INTO agg SELECT a,b, sum(c), count(*), ... FROM xx [WHERE (range)] GROUP BY a,b;
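Concretely, the "remember where you left off" bookkeeping might look like this sketch (the left_off table and the agg columns are hypothetical; it assumes agg has PRIMARY KEY (a, b) and xx has the AUTO_INCREMENT id):

-- One-row bookkeeping table recording where the previous pass stopped.
SELECT @start := last_id FROM left_off;
SELECT @stop  := MAX(id) FROM xx;

INSERT INTO agg (a, b, sum_c, cnt)
    SELECT a, b, SUM(c), COUNT(*)
    FROM xx
    WHERE id > @start AND id <= @stop
    GROUP BY a, b
ON DUPLICATE KEY UPDATE
    sum_c = sum_c + VALUES(sum_c),
    cnt   = cnt   + VALUES(cnt);

-- Remember where this pass stopped, for the next run.
UPDATE left_off SET last_id = @stop;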
That requires touching each original row 1 extra time, on top of the 1 INSERT that touches it now. I would guess that the aggregation might take one-fifth the CPU cost of the single-row inserts.
Sorry, but adding in aggregation will mess with your nice 23K inserts/sec. I don't really know how much it will degrade things -- whether from making the scan possible, adding an AUTO_INCREMENT, maintaining an efficient cache table, etc. I would hope that you could get aggregation + 3rd-party code at better than 10K inserts/sec.
Your COUNT(*) slow queries could, instead, run off some agg table.
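For example, assuming the hypothetical agg table above keeps a cnt column per (a, b) group:

-- Slow: scans many rows of the big table (or one of its secondary indexes).
SELECT COUNT(*) FROM xx WHERE a = 7;

-- Fast: reads a handful of pre-aggregated rows instead.
SELECT SUM(cnt) FROM agg WHERE a = 7;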
> compressed and transferred onto another system.
Do you need replication for backup? Seems like this 'dump' could be your backup?
I have rarely seen a need to push "reports" to a Slave -- provided the Summary tables are well designed.
> I quickly noticed that all available RAM was eaten up and MySQL started crashing.
Was that because of running out of memory outside MySQL? When MySQL (buffer_pool, etc) starts swapping, it feels like a crash.