
Re: At Performance Limit for 2 GB RAM?
Posted by: Jay Pipes
Date: August 04, 2005 07:17PM

Nick Matheson wrote:
> Jay-
>
> The metadatatemp table is required because it
> allows selection of a subset of a given station's
> reports from the y_statsinfo table.
>
> The main metadata table consists of date, offset,
> region and other information. I then select a
> group of metadata_ids based on various parameters
> (putting the results in the y_metadatatemp table)
> and then retrieve the corresponding data from the
> y_statsinfo table which has one entry for every
> stn_id and metadata_id pairing.

Cool. Thanks for clearing that up. It wasn't all that clear to me from the schema...
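Just so we're on the same page, here's a rough sketch of that pattern as I read it (the metadata table name, the filter columns, and stat_value are placeholders for whatever you actually have):

-- Step 1: collect the metadata_ids that match the parameters into the temp table.
INSERT INTO y_metadatatemp (metadata_id)
SELECT metadata_id
  FROM metadata
 WHERE report_date BETWEEN '2005-01-01' AND '2005-03-31'
   AND region = 'NE';

-- Step 2: pull the matching rows from y_statsinfo, which has one row per
-- (stn_id, metadata_id) pair.
SELECT s.stn_id, s.metadata_id, s.stat_value
  FROM y_metadatatemp m
  JOIN y_statsinfo s ON s.metadata_id = m.metadata_id;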

> I have baselined some simple queries to give me
> upper bounds, and I am trying to understand whether
> 1 sec for 5K rows and 5 sec for 50K rows is too slow,
> or whether it is simply the query I have here.

No, for such a small query, the response times across an eq_ref relation should be almost instantaneous; however, that doesn't mean that your network and client will be able to process 50K records instantly. That, as I alluded to in an earlier post, may also be an issue. Still, it sounds like the width of each record isn't that big, so I don't think either query should take more than a second to return...
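If you want to rule out transfer and client time, one quick check (just a sketch; substitute your actual query, and note that stat_value is a placeholder column name) is to wrap the query in a derived table so the server does essentially the same work but only a single row goes back over the wire:

SELECT COUNT(*)
  FROM (SELECT s.stn_id, s.metadata_id, s.stat_value
          FROM y_metadatatemp m
          JOIN y_statsinfo s ON s.metadata_id = m.metadata_id) AS t;

-- If this comes back much faster than fetching the full result set, the time
-- is going into shipping rows to the client, not into the query itself.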

On a side note, if proper indexing is in place, query processing time will not grow linearly with table size; that is, if you double the records, it won't double the query time...*if* the indexes are current and on the appropriate columns.
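The easiest way to confirm the optimizer is actually using those indexes is to run EXPLAIN on the query. A sketch (the summed column and the id range are placeholders, not your real ones):

EXPLAIN
SELECT SUM(s.stat_value)
  FROM y_metadatatemp m
  JOIN y_statsinfo s ON s.metadata_id = m.metadata_id
 WHERE m.metadata_id BETWEEN 1000 AND 2000;

-- In the output, you want to see ref or eq_ref (not ALL) in the type column
-- for y_statsinfo, and an actual key name rather than NULL in the key column.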

> For example:
>
> 1. Generating a sum on a key value over a table
> with a million rows based on a keyed index range I
> can process 800,000 rows per second. (This trend
> is linear from 1 row selected through the entire
> data set.)

I think you mean it is constant, not linear, correct? You mean that the query seems to take the same amount of time, regardless of the number of records.

> 2. For a join of two tables of a million rows each
> and summing a value over a subset of the results
> based on a keyed index range I can process 125,000
> rows per second. (Again a linear trend.)
> 3. For both of the above tests I also ran tests on
> smaller table sizes, and they showed that increasing
> the table size had no effect on the result time; only
> the number of rows returned did.
>
> But I am only getting less than 5,000-10,000 rows
> per second on my test tables of 20 and 200 million
> rows.

Have you considered increasing the key_buffer_size? Perhaps what is occurring is that MyISAM cannot fit all of the index blocks for the 20-million-row table into the key cache, and is therefore releasing blocks from the key cache in an LRU strategy in order to make room for the blocks needed by the query, thrashing the disk in the process. If this is happening, it can seriously impact performance, as MySQL is working double-time to do the key searches.
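You can get a rough read on this from the server status counters, and the key buffer can be resized without a restart (the 512M figure below is purely illustrative; size it to your available RAM and the actual index sizes):

-- Current setting and key cache behavior
SHOW VARIABLES LIKE 'key_buffer_size';
SHOW STATUS LIKE 'Key_read%';
-- A high ratio of Key_reads to Key_read_requests means index blocks are
-- repeatedly being read from disk instead of being served from the cache.

-- Raise it on the running server (and put the same value in my.cnf under
-- [mysqld] so it survives a restart):
SET GLOBAL key_buffer_size = 512 * 1024 * 1024;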

Jay Pipes
Community Relations Manager, North America, MySQL Inc.

Got Cluster? http://www.mysql.com/cluster
Personal: http://jpipes.com
