
Re: How max Innodb table data size will impact usage performance
Posted by: Rick James
Date: March 05, 2016 12:11AM

I can write a query on a 1000-row table that will take hours.

I can write a query on a billion-row table that is 'guaranteed' to take less than 100ms.
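
To illustrate (hypothetical sketches, not real schemas): the slow case forces the server to evaluate an enormous number of row combinations; the fast case is a point lookup that touches only a few index blocks.

  -- Hours on a 1000-row table: a 3-way cross join with an unindexable
  -- condition means roughly 1000^3 = a billion combinations to evaluate.
  SELECT COUNT(*)
      FROM t AS a
      JOIN t AS b
      JOIN t AS c
      WHERE a.val * b.val = c.val;

  -- Well under 100ms on a billion-row table: a point lookup that walks
  -- only a few levels of the PRIMARY KEY's B-tree.
  SELECT * FROM big_table WHERE id = 12345;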

In other words, there are too many variations of schema, queries, indexes, etc., to answer your question as asked.

I don't see many billion-row tables, but the ones I do see are usually designed to perform "well enough".

If you want to talk specifics, let's see a tentative SHOW CREATE TABLE. We can probably point out _likely_ flaws if it were to have 100M rows.
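
Something along these lines (the table name is just a placeholder); the \G form is easier to read when pasted into a post:

  SHOW CREATE TABLE my_big_table\G
  SHOW TABLE STATUS LIKE 'my_big_table'\G

Plus the main SELECT/UPDATE statements you expect to run against it.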

Example 1:
No indexes on the table.
If you don't know why that is a red flag, go back to Databases 101.
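
A rough sketch of why (hypothetical table):

  CREATE TABLE events (
      user_id INT UNSIGNED NOT NULL,
      action VARCHAR(20) NOT NULL,
      ts DATETIME NOT NULL
  ) ENGINE=InnoDB;

  -- With no index, this scans every row -- all 100M of them:
  SELECT * FROM events WHERE user_id = 42;

  -- With an index, only the matching rows are read:
  ALTER TABLE events ADD INDEX idx_user (user_id);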

Example 2:
id CHAR(36) PRIMARY KEY
would raise a big red flag in my mind. Why? Because UUIDs (GUIDs) are 36 characters and _very_ random. Very random indexes perform poorly at scale.
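
For example (hypothetical table; one common mitigation, not the only one). With the CHAR(36) PK, every INSERT lands at a random spot in the clustered index, and every secondary index carries a copy of the 36-character PK. A smaller, sequential PK with the UUID kept as a secondary unique key is usually less painful at scale:

  CREATE TABLE t (
      id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
      uuid BINARY(16) NOT NULL,        -- UNHEX(REPLACE(uuid_text, '-', ''))
      UNIQUE KEY uk_uuid (uuid)
  ) ENGINE=InnoDB;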

Example 3:
Single-column index on every column.
A common mistake of novices. It hits the same problem as Example 2.
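
What that tends to look like (hypothetical):

  CREATE TABLE orders (
      id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
      customer_id INT UNSIGNED NOT NULL,
      status VARCHAR(10) NOT NULL,
      created DATETIME NOT NULL,
      INDEX (customer_id),
      INDEX (status),
      INDEX (created)
  ) ENGINE=InnoDB;

  -- For this query, MySQL will usually pick just ONE of those indexes,
  -- then re-check the other conditions row by row:
  SELECT * FROM orders
      WHERE customer_id = 123 AND status = 'open' AND created > '2016-01-01';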

Example 4:
No compound (composite) indexes.
Another indicator of a novice.
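
A compound index that matches the query above (my sketch, not the only reasonable choice):

  ALTER TABLE orders
      ADD INDEX idx_cust_status_created (customer_id, status, created);
  -- The '=' columns come first, the range column last, so the index
  -- narrows straight down to the matching rows.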

Example 5:
Lots of BIGINT columns.
Do you really need to waste the space (8 bytes vs 4)? Will you really be counting higher than 4 billion?
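
For instance (hypothetical column definitions), at 100M rows the extra 4 bytes is roughly 400MB in the data alone, plus more in every index that includes the column:

  counter BIGINT NOT NULL          -- 8 bytes; max ~9.2 quintillion
  counter INT UNSIGNED NOT NULL    -- 4 bytes; max ~4.29 billion -- usually plenty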

Example 6:
Not using ENUM or TINYINT for flag columns, and/or putting an index on a flag column.
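
E.g. (hypothetical flag/status columns), each of these takes 1 byte per row:

  is_deleted TINYINT(1) NOT NULL DEFAULT 0,
  status ENUM('new', 'active', 'closed') NOT NULL DEFAULT 'new'

(A lone index on a true/false flag is rarely useful anyway -- with only two values it is not selective enough for the optimizer to prefer it over a scan.)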

Example 7:
Not having an explicit PRIMARY KEY on an InnoDB table.
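
If you don't provide one, InnoDB clusters the data on the first NOT NULL UNIQUE key if there is one, otherwise on a hidden 6-byte row id that you can never reference. A minimal fix (hypothetical table name):

  ALTER TABLE my_table
      ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;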

Etc.

About 9% of the tables discussed on these forums are bigger than 10 million rows. Yes, many have encountered performance problems, but many of the problems can be fixed.
