MySQL Forums
Forum List  »  Informix

Sequential mysql reads
Posted by: Christopher Rothery
Date: September 06, 2006 10:22AM

Hi there, hopefully someone can help me!

The company I work for has an application that uses Informix C-ISAM as its database. We don't do anything too spectacular with it: we read, read next, update, add, etc. If we want anything that isn't in key order, we hunt for the answers in the most appropriate key and filter out the chaff. All of our access goes through our own library functions that map pretty much one-to-one onto C-ISAM functions/options, and the application code is what I'd call 'large' (> 4 million lines).

For the last couple of weeks I've been taking baby steps towards moving to MySQL as our database by mimicking what our current functions do against a dump of the C-ISAM database. The logic is that if we can mimic those functions accurately and without too much of a performance hit, we can live like that, and newer parts of the application can be 'MySQL aware' and start to take advantage of the non-sequential reads, table joins, etc. that MySQL gives us.

I've been reasonably successful with the mimicking, but not with the performance. Because this has to be backward compatible, I'm basically issuing a MySQL SELECT for each read we do (not worrying about updates yet). I've experimented with caches (ask for one record, get it and the next 10 back, so that as we loop through we can read from the cache rather than reading afresh). Within reason this would be OK (we'd choose when it's safe to use the cache and when not).
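To make the cache idea concrete, here's a rough sketch of its shape in C against the MySQL C API. The table and column names (orders, order_key, descr) are invented for illustration, the row buffer is deliberately crude, and error handling is cut to the bone:

#include <stdio.h>
#include <string.h>
#include <mysql/mysql.h>

#define CACHE_SIZE 10

static char cache[CACHE_SIZE + 1][256];   /* crude fixed-size row buffer */
static int  cache_count = 0, cache_pos = 0;

/* Fill the cache with the record at 'key' plus the next CACHE_SIZE rows
 * in key order; returns the number of rows cached, or -1 on error. */
int fill_cache(MYSQL *conn, long long key)
{
    char sql[256];
    snprintf(sql, sizeof sql,
             "SELECT order_key, descr FROM orders "
             "WHERE order_key >= %lld ORDER BY order_key LIMIT %d",
             key, CACHE_SIZE + 1);

    if (mysql_query(conn, sql) != 0)
        return -1;

    MYSQL_RES *res = mysql_store_result(conn);
    if (res == NULL)
        return -1;

    MYSQL_ROW row;
    cache_count = cache_pos = 0;
    while ((row = mysql_fetch_row(res)) != NULL && cache_count <= CACHE_SIZE) {
        snprintf(cache[cache_count], sizeof cache[0], "%s %s",
                 row[0], row[1] ? row[1] : "");
        cache_count++;
    }
    mysql_free_result(res);
    return cache_count;
}

/* 'Read next': serve from the cache while it lasts; the caller decides
 * whether to refill (cache still safe to use) or fall back to a fresh read. */
const char *read_next_cached(void)
{
    if (cache_pos < cache_count)
        return cache[cache_pos++];
    return NULL;
}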

What's troubling me (apart from the overall dread of the whole project) is the performance of these MySQL reads. It's complicated, but best described with a bit of pseudo code:

existing logic:

initialise record key
if (can read a record based on initial key) {
    do {
        print record details
    } while (can read next record)
}

With our existing code, this runs through my data set in about 3 seconds.
If I augment that to also read the parallel MySQL record for each C-ISAM record, it jumps to 18 seconds (basically just adding ~13,000 mysql_query calls plus the associated consumption of the results).

To read the MySQL record, I open a connection to the database outside the loop, build a SELECT statement inside the loop (the timings for this alone are tiny), call mysql_query (I haven't mentioned it, but we're using C and linking the MySQL client library into our program), and then fetch and free the results.
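Stripped right down, each read looks something like this (again, the table and column names are made up for illustration, and error handling is trimmed):

#include <stdio.h>
#include <mysql/mysql.h>

/* One read = one round trip: build the statement, run it, fetch, free. */
int read_one(MYSQL *conn, long long key)
{
    char sql[128];
    snprintf(sql, sizeof sql,
             "SELECT order_key, descr FROM orders WHERE order_key = %lld", key);

    if (mysql_query(conn, sql) != 0) {
        fprintf(stderr, "query failed: %s\n", mysql_error(conn));
        return -1;
    }

    MYSQL_RES *res = mysql_store_result(conn);
    if (res == NULL)
        return -1;

    MYSQL_ROW row = mysql_fetch_row(res);
    if (row != NULL)
        printf("%s %s\n", row[0], row[1] ? row[1] : "");

    mysql_free_result(res);
    return row != NULL ? 0 : 1;   /* 0 = found, 1 = no such record */
}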

As the record I'm reading changes each time, the statement changes each time too (the shape of the statement is the same; only the data parts of the WHERE clause change). I'm going to experiment with creating a parameterised procedure that can accept the key field values, to see if that helps at all.
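One route I'm looking at (possibly simpler than a stored procedure) is the C API's server-side prepared statements, where the statement text is parsed once and only the bound key changes per execution. A rough, untested sketch, again with invented table/column names:

#include <stdio.h>
#include <string.h>
#include <mysql/mysql.h>

/* Prepare once, execute per key: read records first_key..last_key. */
int read_keys_prepared(MYSQL *conn, long long first_key, long long last_key)
{
    MYSQL_STMT *stmt = mysql_stmt_init(conn);
    if (stmt == NULL)
        return -1;

    const char *sql = "SELECT descr FROM orders WHERE order_key = ?";
    if (mysql_stmt_prepare(stmt, sql, strlen(sql)) != 0) {
        mysql_stmt_close(stmt);
        return -1;
    }

    long long key = 0;
    MYSQL_BIND param;
    memset(&param, 0, sizeof param);
    param.buffer_type = MYSQL_TYPE_LONGLONG;
    param.buffer = &key;

    char descr[256];
    unsigned long descr_len = 0;
    MYSQL_BIND result;
    memset(&result, 0, sizeof result);
    result.buffer_type = MYSQL_TYPE_STRING;
    result.buffer = descr;
    result.buffer_length = sizeof descr;
    result.length = &descr_len;

    mysql_stmt_bind_param(stmt, &param);
    mysql_stmt_bind_result(stmt, &result);

    for (key = first_key; key <= last_key; key++) {
        if (mysql_stmt_execute(stmt) != 0)
            break;
        while (mysql_stmt_fetch(stmt) == 0)     /* 0 = row fetched OK */
            printf("%lld %.*s\n", key, (int)descr_len, descr);
        mysql_stmt_free_result(stmt);
    }

    mysql_stmt_close(stmt);
    return 0;
}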

Long intro, now the questions:

- Has anyone tried anything vaguely similar?
- Is what I'm trying feasible (making a relational database pretend to be a sequential one, with only a within-reason performance loss)?
- Could I do what I'm trying to do better, or should I just give up?

I know this was long and if you made it this far I salute you. If you manage to come up with any answers/queries I'll be even happier.

Thanks

Chris Rothery
Cognition Solutions plc.
Cheltenham
UK
