MySQL Forums
Forum List  »  Partitioning

Help: Large Data Archiving System
Posted by: Frank Osterberg
Date: June 22, 2006 01:04AM

Hi,

I have the following situation: there is an environment with a large number of different data points (about 300,000). The value of each data point should be stored whenever it changes by more than a given tolerance, and the stored history should be accessible for time-based analysis.

One way is to create one table per data point and store the value and timestamp whenever the value deviates from the last stored value by more than a certain tolerance. That works, but the problem is that it's not easy to write a quick query that finds what a value was at a particular time. Another problem is joining data from different data points, since they are all stored in different tables with different write times for the values.
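A common alternative to one-table-per-point is a single shared table keyed by point and time; "value at time t" then becomes "latest stored row at or before t". A minimal sketch (table and column names are illustrative, not from the post):

```sql
-- One shared history table instead of 300,000 per-point tables.
CREATE TABLE readings (
    point_id INT UNSIGNED NOT NULL,   -- which of the ~300k data points
    ts       DATETIME     NOT NULL,   -- when the change was recorded
    value    DOUBLE       NOT NULL,
    PRIMARY KEY (point_id, ts)
) ENGINE=MyISAM;

-- "What was point 42 at 2006-06-01 12:00?"
-- = the most recent stored value at or before that instant:
SELECT value
FROM readings
WHERE point_id = 42
  AND ts <= '2006-06-01 12:00:00'
ORDER BY ts DESC
LIMIT 1;
```

Because the key is (point_id, ts), that query is a single descending index range scan, and joining two points at a common time is two such lookups rather than a join across differently-timed tables.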

Another problem is storage and archiving. There may be a very large number of records, so creating partial backups and being able to efficiently remove old records is important. One way to help with this is to use different tables for different time periods and then use a MERGE table to join them all. That also works, but what if there is a very large number of tables? I mean, if there are around 300,000 data points (one table each) and I then split each into 52 weekly tables, all in MyISAM, doesn't that mean about 31,200,000 data/index files (two per table, plus the .frm files on top)? And doesn't that cause some kind of problem?
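The week-per-table idea combined with the single shared table above keeps the table count at dozens instead of millions. A sketch of the MERGE approach under that assumption (all names illustrative; MERGE requires identically defined MyISAM tables):

```sql
-- One underlying MyISAM table per week, all with identical definitions:
CREATE TABLE readings_2006w25 (
    point_id INT UNSIGNED NOT NULL,
    ts       DATETIME     NOT NULL,
    value    DOUBLE       NOT NULL,
    KEY (point_id, ts)
) ENGINE=MyISAM;
-- ... readings_2006w26, readings_2006w27, and so on.

-- A MERGE table presents them as one logical table:
CREATE TABLE readings_all (
    point_id INT UNSIGNED NOT NULL,
    ts       DATETIME     NOT NULL,
    value    DOUBLE       NOT NULL,
    KEY (point_id, ts)
) ENGINE=MERGE
  UNION=(readings_2006w25, readings_2006w26)
  INSERT_METHOD=LAST;   -- new rows go to the newest week

-- Expiring a week of history is then a metadata change plus a file drop:
ALTER TABLE readings_all UNION=(readings_2006w26);
DROP TABLE readings_2006w25;
```

Partial backups also fall out of this layout: each closed week is a self-contained set of table files that can be copied or archived independently.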

One nice thing about the nature of the data is that once a certain time period is up (i.e. a week), no more data will ever be written to it and no changes need to be made, but it still has to be accessible for queries. That is good for using myisampack. But even beyond that, it would be optimal if some of the table files (older records) could be moved to a different, large but slow partition, so that they are still accessible even if not quite as fast: something like dynamic tablespaces. Is that possible in MySQL?
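Something close to this does exist for MyISAM. A closed week can be compressed with `myisampack table_name` (followed by `myisamchk -rq` to rebuild the indexes), and on Unix a MyISAM table can be created with its data and index files symlinked onto a different filesystem. A sketch, assuming symbolic link support is enabled on the server and the paths are hypothetical:

```sql
-- Place a (soon-to-be-archived) week's files on a large, slow disk.
-- MySQL creates the .MYD/.MYI there and symlinks them from the datadir:
CREATE TABLE readings_2006w01 (
    point_id INT UNSIGNED NOT NULL,
    ts       DATETIME     NOT NULL,
    value    DOUBLE       NOT NULL,
    KEY (point_id, ts)
) ENGINE=MyISAM
  DATA DIRECTORY  = '/slow_disk/mysql_archive'
  INDEX DIRECTORY = '/slow_disk/mysql_archive';
```

A packed, symlinked table stays fully queryable (read-only), which matches the "closed week" access pattern described above.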

I know about partitioning in 5.1, but since it is still beta, and I am having some trouble getting it to work the way I want, I would first like to consider other options. (Each of the 300,000 tables would need to be split into at least 52 partitions, but at least then they could use InnoDB, right?)
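For comparison, here is roughly what the 5.1 approach would look like if it pans out: one range-partitioned InnoDB table replaces the per-week MyISAM tables, and expiring old data becomes a partition drop. A sketch against the 5.1 beta syntax (names and dates illustrative):

```sql
CREATE TABLE readings (
    point_id INT UNSIGNED NOT NULL,
    ts       DATETIME     NOT NULL,
    value    DOUBLE       NOT NULL,
    KEY (point_id, ts)
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(ts)) (
    PARTITION p2006w25 VALUES LESS THAN (TO_DAYS('2006-06-26')),
    PARTITION p2006w26 VALUES LESS THAN (TO_DAYS('2006-07-03')),
    PARTITION pmax     VALUES LESS THAN MAXVALUE
);

-- Removing a week of history without touching the rest of the table:
ALTER TABLE readings DROP PARTITION p2006w25;
```

The trade-off versus MERGE tables is that partitions cannot be individually packed or moved to another disk the way separate MyISAM tables can, at least in the 5.1 betas.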

But anyway, it's nothing too critical yet. There is a working system (well, when it hasn't crashed again) using SQL Server that is having a lot of trouble with the data environment described above: horrible query performance, and it is now even failing to keep up with the inserts. The entire database has to be reset every few months to keep it from growing too large.

So I think there has to be a better way. I'm not sure it is with MySQL, but MySQL is my first point of interest, since there isn't a huge budget to allow for something like a big Oracle cluster with three DBAs watching it 24/7. The system just needs to archive lots of different data points as efficiently as possible while still allowing analysis of the data over as large a time range as is reasonably doable with acceptable performance.

So, I am open to any ideas. What is the best way to make this work right?

Thanks for any help!!

