MySQL Forums
Forum List  »  Performance

Is this model correct?
Posted by: Levi Tedder
Date: March 10, 2010 04:00AM

Hi

I have an old customer table that originates from an old Sybase setup. The primary key is two fields, a sequence number and a varchar for owner. The table was spread around several databases, each with its own owner.
Each customer may have any number of keys/codes. Up until now we only had four or five different types of them, and they were fields in the customer table (and indexed). Now we're going to get several more, so I thought creating a separate table for them would be the right move.

So, I created a new table with 4 fields: number and owner from the customer table, a type field, and a varchar field for the key/code. I specified number, owner and type as the primary key and did not index the key/code field. I've filled it with test data and tried to execute queries, but they are very slow. If I want to look up the code type and customer name for a given key, it takes about 100 seconds (or longer). Here's the explain:
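For reference, the new table looks roughly like this. The column types and lengths are my guesses from the EXPLAIN output (key_len 7 would fit an INT plus a short CHAR/VARCHAR); the real definitions may differ:

```sql
-- Sketch of the new keys table; types/lengths are assumptions, adjust to the real schema
CREATE TABLE ck_customerkeys (
    ck_cu_no     INT         NOT NULL,  -- customer sequence number (from cu_customer)
    ck_cu_subsid VARCHAR(3)  NOT NULL,  -- owner/subsidiary, second part of the old composite key
    ck_kt_no     INT         NOT NULL,  -- key/code type
    ck_key       VARCHAR(50) NOT NULL,  -- the key/code value itself
    PRIMARY KEY (ck_cu_no, ck_cu_subsid, ck_kt_no)
) ENGINE=InnoDB;
```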

SQL result

Generated by: phpMyAdmin 2.6.4-pl1 / MySQL 5.0.21-community-nt-log
SQL query: EXPLAIN SELECT cu_name,ck_kt_no FROM cu_customer, ck_customerkeys WHERE cu_no = ck_cu_no AND cu_subsid = ck_cu_subsid AND ck_key = 'examplecode';

Rows: 2

id: 1
select_type: SIMPLE
table: ck_customerkeys
type: ALL
possible_keys: PRIMARY
key: NULL
key_len: NULL
ref: NULL
rows: 9609934
Extra: Using where

id: 1
select_type: SIMPLE
table: cu_customer
type: eq_ref
possible_keys: PRIMARY
key: PRIMARY
key_len: 7
ref: ck_customerkeys.ck_cu_no,ck_customerkeys.ck_cu_subsid
rows: 1
Extra: Using where

(If I add other tables to the query it will be much, much slower)

The customer table contains about 13 million rows (6.4 GB data), and the code/key table about 10 million rows (1.4 GB data). When I go live with this, the code/key table will contain 60-70 million rows. The InnoDB buffer pool is about 1.3 GB (32-bit Windows) now. I will set up a new 64-bit Windows server with the latest MySQL and 16 GB of memory, and I suspect that will be somewhat faster. But the current server is fast on "all" current queries, so maybe the way I set this up is wrong. Should I create a new sequence number in the customer table, index it, and use that as the primary (foreign) key in the customerkeys table? Or should I index the ck_key field in the current setup? When I go live, I will remove the 4 or 5 fields (with their indexes) from the customer table that I no longer need.
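The surrogate-key alternative I'm asking about would look roughly like this (column and index names are made up for illustration):

```sql
-- Sketch only: replace the two-column natural key with an auto-increment surrogate.
-- AUTO_INCREMENT columns must be indexed, so the key is added in the same statement.
ALTER TABLE cu_customer
    ADD COLUMN cu_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    ADD UNIQUE KEY uq_cu_id (cu_id);

-- The keys table would then reference the single-column key instead of (no, subsid):
-- PRIMARY KEY (ck_cu_id, ck_kt_no)
```

A narrower key would make the join cheaper and every secondary index on the keys table smaller, but it doesn't by itself fix the slow lookup on ck_key.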

Thanks for any input on this and sorry for the lengthy explanation...

Edit: after adding an index on ck_key, the query takes 0.0006 sec. So three fields form the primary key (and are indexed through it), and the last one now has its own secondary index. But the index data is now larger than the table data, and as far as I know it shouldn't be?
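For anyone reading along: a composite primary key like (ck_cu_no, ck_cu_subsid, ck_kt_no) can only be used by queries that filter on a leftmost prefix of those columns, so a WHERE on ck_key alone can't use it, which is why the first EXPLAIN showed type: ALL over ~9.6 million rows. The fix was simply:

```sql
-- Index the lookup column so WHERE ck_key = '...' can do an index lookup
-- instead of a full table scan (index name is my choice)
ALTER TABLE ck_customerkeys ADD INDEX idx_ck_key (ck_key);
```

As for the index being larger than the data: in InnoDB every secondary index entry also stores a copy of the full primary key, so with a wide three-column primary key and a varchar lookup column, the secondary index can legitimately end up comparable to or larger than the row data.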



Edited 2 time(s). Last edit at 03/10/2010 04:18AM by Levi Tedder.





