MySQL Forums
Forum List  »  InnoDB

Re: How to optimize very large narrow tables?
Posted by: Rick James
Date: August 28, 2008 10:27PM

If there is not a lot of "locality of reference", you are out of luck.

Count the disk hits. 100G rows will be terabytes of data/index. That will be much bigger than any RAM cache (innodb_buffer_pool_size, memcached, etc.) you can muster. Therefore, it will be hitting the disk for most reads. You can't do more than about 60 disk reads/sec (ok, maybe 200 in a fat RAID). If your tree is 10 levels deep, you are down to about 20 traversals/sec. (Probably more like 100, assuming the top nodes are hit a lot and are therefore well cached.)
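To make that arithmetic concrete, here is a back-of-envelope sketch. The disk-read rate and tree depth are the numbers above; the count of cached levels is an assumption chosen to illustrate how caching the top of the tree moves you from ~20 to ~100 traversals/sec.

```python
# Back-of-envelope estimate of B-tree traversal throughput when the
# working set is far larger than RAM. Numbers from the post; the
# cached-level count is an illustrative assumption.

DISK_READS_PER_SEC = 200   # a "fat RAID"; a lone spindle is more like 60
TREE_DEPTH = 10            # levels in the index tree
CACHED_LEVELS = 8          # assume the top levels stay in the buffer pool

# Worst case: every level costs a disk read.
worst_case = DISK_READS_PER_SEC / TREE_DEPTH            # -> 20.0 traversals/sec

# With the hot top levels cached, only the bottom levels hit disk.
uncached_reads = TREE_DEPTH - CACHED_LEVELS
with_caching = DISK_READS_PER_SEC / uncached_reads      # -> 100.0 traversals/sec

print(worst_case, with_caching)
```

The point is that the bottleneck is seeks, not CPU, so the only levers are fewer uncached levels per lookup or more spindles.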

Well, there is also using slaves. 30 slaves might let you do 3000 traversals/sec.

Ok, so you decide to keep the number of records down so everything stays in memory. Well, InnoDB is not good for that. There are 34+ bytes of overhead per row as you have described it. MyISAM has essentially 0 bytes of overhead per row, so the table would be about half as big, thereby letting you get twice as many rows before you spill to disk.
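A quick sketch of that "twice as many rows" claim. The 34-byte InnoDB overhead is from the post; the payload size and buffer pool size are illustrative assumptions for a narrow table.

```python
# Rough sketch: how per-row overhead changes how many rows fit in RAM.
# The 34-byte InnoDB overhead is the figure cited above; payload and
# buffer pool sizes are assumptions for illustration.

BUFFER_POOL_BYTES = 32 * 1024**3   # 32 GB cache (assumption)
PAYLOAD_BYTES = 30                 # actual data in one "narrow" row (assumption)
INNODB_OVERHEAD = 34               # per-row overhead, InnoDB
MYISAM_OVERHEAD = 0                # essentially none in MyISAM

innodb_rows = BUFFER_POOL_BYTES // (PAYLOAD_BYTES + INNODB_OVERHEAD)
myisam_rows = BUFFER_POOL_BYTES // (PAYLOAD_BYTES + MYISAM_OVERHEAD)

print(innodb_rows, myisam_rows, myisam_rows / innodb_rows)
```

With a ~30-byte payload the overhead more than doubles the row footprint, so MyISAM holds roughly twice the rows in the same cache before spilling to disk.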
