Re: Big table (too big?) how to handle?
Posted by: Peter Brawley
Date: March 23, 2015 01:14PM

"Doing anything that locks those tables takes forever" suggests you're using MyISAM. If the tables are not InnoDB, convert!

A well-designed table with 65M rows and adequate indexing should query pretty comfortably. You wouldn't want to put that much data on a shared hosting server, though. You'd need a dedicated server where you can choose the RAM, disk space and CPU power for fast imports & queries.
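As a sketch of what "adequate indexing" means, suppose (hypothetically; these column names aren't from your schema) most queries filter on a sensor id and a time range. A composite index matching that pattern is what keeps 65M-row lookups quick:

-- Composite index on the columns the WHERE clause actually uses
ALTER TABLE bigtable ADD INDEX idx_sensor_ts (sensor_id, ts);

-- Verify the optimizer picks it up
EXPLAIN SELECT * FROM bigtable
WHERE sensor_id = 42 AND ts >= '2015-01-01' AND ts < '2015-02-01';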

Then tune the server and the tables; see http://mysql.rjweb.org/doc.php/memory and http://mysql.rjweb.org/doc.php/ricksrots#indexing
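For example, the buffer pool is the big InnoDB knob. These checks show its current size and whether reads are being served from it; the usual rule of thumb of roughly 70% of RAM on a dedicated box is a starting point, not gospel:

-- Current buffer pool size; set innodb_buffer_pool_size in my.cnf to change it
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Innodb_buffer_pool_reads vs read_requests shows how often disk is hit
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';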

Typically, database files have lots of air, so they compress a lot. If each of the 65M rows has an ID and 28 floats, the table might be 9-10GB, compressible to 1-2GB, and that could be painlessly ftp-ed to a hosting server, then uncompressed and imported.
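Back of the envelope, assuming 4-byte FLOATs and an 8-byte ID: 65M rows x (8 + 28 x 4) bytes = 65M x 120 bytes, about 7.8GB of raw data, which lands in the 9-10GB range once InnoDB page overhead is added.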

That said, if nine tenths of the data can be done without for most purposes, you could set up an archiving process that leaves the original data intact but maintains a 10% sample for current query access.
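A minimal sketch of that, assuming a hypothetical bigtable with an auto-increment id:

-- Same structure and indexes as the original
CREATE TABLE bigtable_sample LIKE bigtable;

-- Keep every 10th row; the original stays intact as the archive
INSERT INTO bigtable_sample
SELECT * FROM bigtable WHERE id % 10 = 0;

Point routine queries at bigtable_sample and fall back to the full table when you need all the data.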
