MySQL Forums
Forum List  »  Partitioning

Storing time series data using partitioning
Posted by: Florian Rampp
Date: August 14, 2014 05:45AM


We are currently setting up a MySQL database (version 5.6.17) on an Amazon db.m1.large RDS instance for storing time series data. Each time series property of an object is stored in a separate InnoDB table (e.g. the inside temperature of building 42 is stored in table building__inside_temperature). There are about 50 time series tables, and each has three columns:

- timestamp in milliseconds as bigint
- ID of object (as char(22))
- numerical value (as double)

The primary key is (timestamp, ID). We use range partitioning on the timestamp column with one partition per day for 2.5 years, and key sub-partitioning on the ID with 8 sub-partitions. This results in about 800 partitions with 8 sub-partitions each per table. We use innodb_file_per_table=OFF since there would be too many open files otherwise. All other MySQL parameters are left at their defaults.
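For reference, a minimal sketch of one such table as described above — the table name, column names, and partition boundary values are assumptions for illustration, not our actual DDL (which defines roughly 800 daily partitions):

```sql
-- Hypothetical sketch of one time series table (names and boundaries illustrative).
CREATE TABLE building__inside_temperature (
  ts  BIGINT   NOT NULL,  -- timestamp in milliseconds since epoch
  id  CHAR(22) NOT NULL,  -- object ID
  val DOUBLE   NOT NULL,  -- numerical value
  PRIMARY KEY (ts, id)
)
ENGINE = InnoDB
PARTITION BY RANGE (ts)
SUBPARTITION BY KEY (id)
SUBPARTITIONS 8 (
  -- one partition per day; boundaries are UTC midnights in milliseconds
  PARTITION p20140814 VALUES LESS THAN (1408060800000),
  PARTITION p20140815 VALUES LESS THAN (1408147200000),
  PARTITION pmax      VALUES LESS THAN (MAXVALUE)
);
```

Note that the partitioning column (ts) is part of the primary key, as MySQL requires for partitioned InnoDB tables.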

When we connect with 100 threads (and connections) in parallel and open a time series table, the MySQL instance runs out of memory, probably due to this bug:

We used the following mysqlslap command to reproduce it:

mysqlslap --create=create_timeseries.sql --delimiter=";" --query=insert_datapoint.sql --number-of-queries=1 --concurrency=100

Content of the files is here:

What do you think about the tables? Any suggestions about the partitioning?



