MySQL Forums

Quick tips on DB opt?
Posted by: Brandon DuPree
Date: November 28, 2010 12:30PM

Hey guys, I wouldn't normally ask questions like this without doing my research first, but I'm late for work and am hoping to have a point of reference when I get home.

The Issue:
Working with the main table in one of my dbs is ridiculously slow, and I need to speed it up significantly: parsing each chunk of data I receive and inserting it into the db now takes over 5 minutes, which is the interval at which I receive said chunks. The overlap is causing update issues and completely invalidating the data.

Data info:
I'm receiving pricing information and other numerical data from a game market with roughly 6k new (often duplicate) rows submitted every 5 min on the dot.

DB info:
The table has 11 cols and an average of 2m rows. All of the cols hold numerical data, and I have a hunch that the fact that they're set to VARCHAR is part of my issue, but I get errors when changing them to INT, probably because the quotes around each value signify a string when inserting data? (Rough sketch of what I mean just below.)
I'd like to purge as little data as possible in order to retain speed, but that may not be possible: a month-long backlog would come to something like 15m rows, and given how quickly the speed drops off as new rows are added, I don't see that happening.
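For what it's worth, this is roughly the type change I'm picturing (totally untested guesswork on my part; the exact types and widths are just my best guesses, and the bid column holds 'True'/'False' text, which I suspect is what's actually throwing the errors):

-- Convert the True/False text in bid to 0/1 first, since I'm guessing
-- that's what breaks the conversion to a numeric type:
UPDATE dump SET bid = IF(bid = 'True', 1, 0);

-- Then change the column types (covering the ten columns from my query
-- example below; the remaining one would get the same treatment):
ALTER TABLE dump
  MODIFY price        DECIMAL(16,2) NOT NULL,
  MODIFY volRemaining DECIMAL(16,1) NOT NULL,
  MODIFY typeID       INT UNSIGNED NOT NULL,
  MODIFY orderID      BIGINT UNSIGNED NOT NULL,
  MODIFY bid          TINYINT(1) NOT NULL,
  MODIFY issued       INT UNSIGNED NOT NULL,
  MODIFY stationID    INT UNSIGNED NOT NULL,
  MODIFY regionID     INT UNSIGNED NOT NULL,
  MODIFY received     INT UNSIGNED NOT NULL,
  MODIFY lastVolume   DECIMAL(16,1) NOT NULL;

As far as I can tell, quoted values like '912498.52' still go into a numeric column fine (MySQL converts the string), so the quotes alone shouldn't be what's erroring.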

My attempts to fix so far:
I've set the sole column that gets queried as the primary key, and tried both with and without a full table index, with little change.
I've tried OPTIMIZE TABLE and REPAIR TABLE, and they did nothing.
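In case it's relevant, this is roughly the table layout I think I should be aiming for (again untested: orderID as the primary key so REPLACE can actually spot duplicate orders, and typeID is only a placeholder for whichever column my SELECTs really filter on):

CREATE TABLE dump (
  orderID      BIGINT UNSIGNED NOT NULL,
  typeID       INT UNSIGNED NOT NULL,
  regionID     INT UNSIGNED NOT NULL,
  stationID    INT UNSIGNED NOT NULL,
  price        DECIMAL(16,2) NOT NULL,
  volRemaining DECIMAL(16,1) NOT NULL,
  bid          TINYINT(1) NOT NULL,
  issued       INT UNSIGNED NOT NULL,
  received     INT UNSIGNED NOT NULL,
  lastVolume   DECIMAL(16,1) NOT NULL,
  PRIMARY KEY (orderID),        -- unique per market order, so REPLACE updates rather than piling up duplicates
  KEY idx_query_col (typeID)    -- secondary index on the column the SELECTs filter on (placeholder)
) ENGINE=InnoDB;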

Server Info:
Not sure on the exact specs, but I know I have over 4 GB of RAM allocated to MySQL and a couple of very nice processors. The buddy of mine hosting this said the box I'm on runs about $15k+, if that helps.

Query Example:
I use this for every line of data I receive, and little else (I'm an SQL noob, please don't flame if this is idiotic, lol):

REPLACE INTO dump (price,volRemaining,typeID,orderID,bid,issued,stationID,regionID,received,lastVolume) VALUES('912498.52','33.0','28209','1772753590','False','1289377775','60004588','10000030','1290966611','33.0')

A basic SELECT * query is run against about 70 of these as well.
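If it matters, the change I had in mind for the insert side is batching a whole chunk into one statement instead of 6k separate REPLACEs, something along these lines (the second row's values are made up for the example):

REPLACE INTO dump (price,volRemaining,typeID,orderID,bid,issued,stationID,regionID,received,lastVolume) VALUES
('912498.52','33.0','28209','1772753590','False','1289377775','60004588','10000030','1290966611','33.0'),
-- ...the rest of the 5-minute chunk appended as further value lists...
('250000.00','10.0','28209','1772753591','True','1289377775','60004588','10000030','1290966611','10.0');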

Now, I'm not looking for someone to fix this for me, but any info on what factors contribute to the speed in this situation would be very helpful. The script that handles the incoming data has quite a bit more to it than that one line, but I've tested it: with a handful of rows in the table, each 6k+ row chunk of data I receive is processed in about 5 seconds, and after about 80k rows the time climbs to over 5 minutes.
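When I get home I was planning to at least check whether the SELECT side is still using an index once the table is big, with something like this (the WHERE column is again just a guess at what my script actually filters on):

-- Shows whether the query hits an index or does a full table scan:
EXPLAIN SELECT * FROM dump WHERE typeID = '28209';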

I'm perfectly happy to recode the entire MySQL portion of the script, so any solution for maximum optimization is acceptable.


Thank you very much in advance for any advice, links, solutions, etc.








