MySQL Forums

Re: trillion records?
Posted by: Shawn Taylor
Date: August 09, 2011 01:01PM

Rick,

My math might be off here:

(1 000 000 000 000 records * 100 bytes) / (1024 * 1024 * 1024 * 1024), or

100 000 000 000 000 / 1 099 511 627 776, which reads as:

one hundred trillion bytes divided by 1 terabyte ≈ 91 TB
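For what it's worth, the same calculation as a quick Python sketch (the 100-bytes-per-record figure is the assumption from earlier in the thread):

```python
# Back-of-the-envelope check of the storage math above.
records = 10**12                  # one trillion records
row_bytes = 100                   # assumed average bytes per record
total_bytes = records * row_bytes
tib = total_bytes / 1024**4       # 1 TiB = 1 099 511 627 776 bytes
print(f"{total_bytes:,} bytes is about {tib:.1f} TiB")
```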

You'd be hard pressed to get that in a single server unless you look at something like:

http://www-03.ibm.com/systems/storage/disk/ds3500/

which can be stacked and directly attached. That, however, creates a single point of failure in your solution, and the above is only the storage component.

It does, however, offer 4x the disk.

I don't have the experience with large data sets that you do, so I'm not sure that translates to 1/4th of the times referenced above, but even 1/4th of 30 years still doesn't work, so in my opinion you really need a SAN here, do you not?

Unless the summary tables really do reduce the data set by 10x?

It has always struck me as curious that your HW point of reference is a relatively small-footprint box. If you had 10 of them in a sharding scenario, or used Spider (I have some research to do here), that would certainly eliminate my 'single point of failure' problem. However, a cluster riding on a SAN does too.
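To put a rough number on what sharding buys, here's a sketch assuming an even split across 10 boxes (the shard count is hypothetical, and the total comes from the earlier math):

```python
# Rough per-shard storage if the data set were split evenly.
total_tib = (10**12 * 100) / 1024**4   # ~91 TiB, from the earlier math
shards = 10                            # hypothetical shard count
per_shard_tib = total_tib / shards
print(f"about {per_shard_tib:.1f} TiB per shard")
```

Roughly 9 TiB per box is at least in the range a single commodity server can hold.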

Since the solution I am proposing is the *only* way I know to solve this problem, I am eager to understand a more affordable approach to this kind of scalability, as the SAN/cluster solution doesn't come cheap.

Thanks,

Shawn

