Re: large InnoDB table partitioning without explicit PK
Posted by: Rick James
Date: November 19, 2014 12:12AM

(I guess I missed this message.)

> Regarding Select_scan / Com_select - (relatively high) number of full table scans, there was a select statement...

I love working with you -- I point you in a direction; you carry the ball from there.

> My idea is to change it and perform aggregations in MySQL directly (either on a heap/temp table, on a slave, or both).

Sure. The MEMORY cache table was designed specifically to make that work well. (BTW, my implementation did not use multiple threads, and it could batch multiple rows into a single INSERT.)
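
For reference, a minimal sketch of that cache-table pattern. The names (xx, cache) are placeholders, not your schema, and it assumes the columns are MEMORY-compatible (no TEXT/BLOB):

CREATE TABLE cache LIKE xx;           -- same columns as the big table
ALTER TABLE cache ENGINE=MEMORY;      -- staging rows live in RAM

-- the ingest code INSERTs into cache (batched where possible); every second or so, flush:
INSERT INTO xx SELECT * FROM cache;   -- one batched write into InnoDB
DELETE FROM cache;                    -- or keep two cache tables and RENAME TABLE to swap them

The two-table RENAME swap is what keeps the flush from losing rows that arrive between the INSERT..SELECT and the DELETE.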

An issue with aggregation -- when doing the scan to collect the rows to summarize, it is best to have those rows consecutive. That is, they should be one of
* an entire table (such as the MEMORY cache table) or
* an entire partition or
* a range of rows based on the PRIMARY KEY (not a secondary key!)

This is because it is very efficient to do a table scan or a range scan; it is much less efficient to use a secondary key. In case you are not familiar with InnoDB's indexing scheme, I'll briefly explain it.
* A B+Tree is efficient for scanning consecutive rows.
* The data and the PRIMARY KEY live together in one B+Tree, ordered by the PK (the "clustered" index).
* Each secondary INDEX lives in a separate B+Tree; it contains the secondary column(s) plus the PK's column(s).
* A secondary-key lookup therefore requires 2 B+Tree lookups: one in the secondary index, then one in the PK tree to fetch the row. Thus, scanning a range via one of your secondary keys would be quite inefficient.
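
To make that concrete, a hypothetical layout (table and column names invented, not your actual schema):

CREATE TABLE xx (
    id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    ts DATETIME NOT NULL,
    a INT NOT NULL,
    b INT NOT NULL,
    c INT NOT NULL,
    PRIMARY KEY (id),     -- clustered: the row data lives in this B+Tree
    KEY ts_idx (ts)       -- secondary: holds (ts, id); fetching a,b,c then needs a PK lookup
) ENGINE=InnoDB;

-- PK range: one B+Tree, rows read consecutively (cheap)
SELECT a, b, c FROM xx WHERE id BETWEEN 1000000 AND 2000000;
-- secondary range: index B+Tree plus one PK lookup per row (expensive at scale)
SELECT a, b, c FROM xx WHERE ts >= '2014-11-01' AND ts < '2014-11-02';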

By using an AUTO_INCREMENT PK (and no cache table), you have a way to summarize less than the full table (or partition) efficiently. You just remember where you "left off".

The aggregation would probably be something like
INSERT INTO agg SELECT a,b, sum(c), count(*), ... FROM xx [WHERE (range)] GROUP BY a,b;
That requires touching each original row 1 extra time, on top of the 1 INSERT that touches it now. I would guess that the aggregation might take one-fifth the CPU cost of the single-row inserts.
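
A sketch of how the "left off" bookkeeping and that INSERT..SELECT fit together (the agg layout and names are hypothetical; I've added ON DUPLICATE KEY UPDATE in case the same a,b group shows up in more than one batch):

-- one-row bookkeeping table: CREATE TABLE agg_progress (last_id BIGINT UNSIGNED NOT NULL);
SELECT @hi := MAX(id) FROM xx;                 -- snapshot the current high-water mark
SELECT @lo := last_id FROM agg_progress;

INSERT INTO agg (a, b, sum_c, cnt)
    SELECT a, b, SUM(c), COUNT(*)
    FROM xx
    WHERE id > @lo AND id <= @hi               -- PK range scan: only the new rows
    GROUP BY a, b
    ON DUPLICATE KEY UPDATE                    -- assumes agg has PRIMARY KEY (a, b)
        sum_c = sum_c + VALUES(sum_c),
        cnt   = cnt   + VALUES(cnt);

UPDATE agg_progress SET last_id = @hi;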

Sorry, but adding aggregation will mess with your nice 23K inserts/sec. I don't really know how much it will degrade things -- to make the scan possible and/or add an AI (AUTO_INCREMENT) PK and/or set up an efficient cache table, etc. I would hope that you could get aggregation + 3rd-party code at better than 10K inserts/sec.

Your COUNT(*) slow queries could, instead, run off some agg table.
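
For example (names hypothetical), instead of

SELECT COUNT(*) FROM xx WHERE a = 17;          -- scans a huge swath of the big table

you would run

SELECT SUM(cnt) FROM agg WHERE a = 17;         -- reads a handful of pre-summarized rows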

> compressed and transferred onto another system.

Do you need replication for backup? Seems like this 'dump' could be your backup?
I have rarely seen a need to push "reports" to a Slave -- given that the Summary tables are well designed.

> I quickly noticed that all available RAM was eaten up and MySQL started crashing.

Was that because of running out of memory outside MySQL? When MySQL (buffer_pool, etc) starts swapping, it feels like a crash.
