Rick James Wrote:
-------------------------------------------------------
> You still have not shown
> SHOW TABLE STATUS LIKE 'item'\G
> The size of item could indicate that you don't
> have enough ram to cache everything, which would
> imply that you are hitting the disk more than you
> would like.
ok:
mysql> SHOW TABLE STATUS LIKE 'item';
+------+--------+---------+------------+-------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+---------------------+-------------------+----------+----------------+---------+
| Name | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Collation | Checksum | Create_options | Comment |
+------+--------+---------+------------+-------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+---------------------+-------------------+----------+----------------+---------+
| item | MyISAM | 10 | Dynamic | 16463 | 335 | 5525368 | 281474976710655 | 3458048 | 0 | NULL | 2009-04-30 19:59:00 | 2009-04-30 19:59:03 | 2009-04-30 19:59:02 | cp1251_general_ci | NULL | | |
+------+--------+---------+------------+-------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+---------------------+-------------------+----------+----------------+---------+
>
> You have now presented three different flavors of
> the SELECT in question -- the ORDER BY changed;
> this could be significant. :(
>
> Have you tried removing DISTINCT? It is probably
> redundant.
>
Yes — maybe you can find a solution:
The category_item table can hold the same item in 2 - 5 different categories, so if I remove DISTINCT I'll see some items duplicated..
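If the ordering by cc.sortorder / ic.sortorder is not essential, the duplicates could be avoided without DISTINCT by testing category membership with EXISTS instead of joining — an untested sketch, using the table and column names from the query above:

```sql
-- Sketch: avoid DISTINCT by checking membership with EXISTS.
-- Assumes the cc.sortorder / ic.sortorder ordering can be dropped,
-- since those columns are no longer available in the outer query.
SELECT i.id
FROM item i
WHERE EXISTS (
    SELECT 1
    FROM category_item ic
    JOIN category_childs cc ON cc.category_id = ic.category_id
    WHERE ic.item_id = i.id
      AND cc.parent_id = 6880
)
ORDER BY i.sale DESC, i.enabled DESC;
```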
> 7000 rows returned -- The EXPLAIN implies that it
> is hitting 3 tables, ending with 7000 hits on the
> last table (item). This would be 7000 probes into
> the index (probably pretty well cached in the
> key_buffer), plus 7000 probes into the data (can't
> tell how well cached).
>
> At worst, I would expect on the order of 7K disk
> hits, which could be roughly 7 seconds. Since
> your timing is better than that, I would guess
> that much of the data is cached.
>
> So, what is it doing?
> 1. Hit the first two tables (pretty efficiently),
> 2. Randomly(?) fetch 7000 rows from item
> 3. Put those rows, plus the 4 extra fields (rub.,
> onstock, sortorder, sortorder) in a tmp table
> (biggest number in the Profile)
> 4. Sort the rows (could not use any index, since
> it needed to find the rows by "id") -- this was
> only 70ms.
>
> I would say that key_buffer_size = 100M is about
> right for your 1GB of RAM, and the table sizes.
> Probably no other tunables matter.
It is already configured for 384 MB =(
in my.cnf:
key_buffer = 384M
full section in my.cnf:
# The MySQL server
[mysqld]
port = 3306
socket = /tmp/mysql.sock
skip-locking
key_buffer = 384M
max_allowed_packet = 1M
#table_cache = 512
table_cache = 1024
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 32M
query_cache_type = 2
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 2
long_query_time = 1
slow_query_log = 1
#charset setup
character-set-server=cp1251
collation-server=cp1251_general_ci
skip-character-set-client-handshake
But in that case there may not be enough physical RAM. So we ordered 1 GB more RAM and will receive it in a week. We'll see if it helps.
>
> Do you need all 7000 rows? Do you need all the
> columns of 'item'?
No, sure not. It's a classical internet shop:
First you open a category, and then you want to see the items inside that category.
Plus a page navigator.
This query shows all items from the current category and all its subcategories, through the full tree.
But only the first 30 items are shown, so I just need the first 30 records and the total record count.
The fastest way I found to do it is to pick those rows out on the client (PHP script) side.
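Since only 30 items per page are displayed, the paging could instead be pushed into MySQL rather than fetching all rows into PHP — an untested sketch using LIMIT together with SQL_CALC_FOUND_ROWS (the offset 0 and page size 30 are assumptions for the first page):

```sql
-- Page 1 of the category listing: 30 rows, plus the total count.
SELECT SQL_CALC_FOUND_ROWS DISTINCT i.id
FROM item i, category_item ic, category_childs cc
WHERE i.id = ic.item_id
  AND ic.category_id = cc.category_id
  AND cc.parent_id = 6880
ORDER BY i.sale DESC, i.enabled DESC, cc.sortorder, ic.sortorder
LIMIT 0, 30;

-- Total number of matching rows, as if there were no LIMIT:
SELECT FOUND_ROWS();
```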
>
> One trick that _sometimes_ helps in a situation
> like this: Fetch the ids, then fetch the rows.
>
> SELECT i2. * ,
> 'rub.' currency,
> i2.enabled & ( i2.quantity >1 ) AS onstock
> FROM item i2,
> ( SELECT DISTINCT i.id
> FROM item i, category_item ic, category_childs
> cc
> WHERE i.id = ic.item_id
> AND ic.category_id = cc.category_id
> AND cc.parent_id =6880
> ORDER BY i.sale DESC , i.enabled DESC ,
> cc.sortorder, ic.sortorder
> ) i1
> WHERE i2.id = i1.id
>
WOW! It's GREAT! It really helped! 2-3 times faster! =)
starting 0.000020
checking query cache for query 0.000136
Opening tables 0.000029
System lock 0.000012
Table lock 0.000099
optimizing 0.000023
statistics 0.000109
preparing 0.000032
Creating tmp table 0.000034
executing 0.000006
Copying to tmp table 0.154495
Sorting result 0.004748
Sending data 0.004075
removing tmp table 0.000059
Sending data 0.000018
init 0.000069
optimizing 0.000019
statistics 0.000043
preparing 0.000020
executing 0.000006
Sending data 0.001440
end 0.000010
query end 0.000007
freeing items 0.000048
removing tmp table 0.000034
closing tables 0.000015
logging slow query 0.000006
cleaning up 0.000009
Total query time is now ~0.200 s on average (instead of 0.500 s);
execution went from about 0.400 s down to 0.130 s =) It's much better! Thank you!
> That way the "tmp" table has less to haul around,
> possibly saving enough to speed up the overall
> query. Also, it may lead to using a MEMORY tmp
> table, instead of MyISAM (and possibly disk).
>
> Aha -- `description` text -- this prevents the use
> of MEMORY for the tmp table.
>
Maybe move this field to a separate (new) table and query it only when we need it?
And one more thing: we need at least the short_descr field (varchar(125)) to select short descriptions for items. Can MySQL still use a MEMORY temporary table with it?
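That vertical split could look roughly like this — an untested sketch, with item_description as an assumed name for the new table (adjust types and engine to the real schema):

```sql
-- Move the TEXT column out of `item`, so tmp tables built from `item`
-- are no longer forced onto disk (MEMORY cannot hold TEXT/BLOB).
CREATE TABLE item_description (
    item_id INT NOT NULL PRIMARY KEY,
    description TEXT
) ENGINE=MyISAM;

INSERT INTO item_description (item_id, description)
SELECT id, description FROM item;

ALTER TABLE item DROP COLUMN description;

-- Fetch the long description only on the item detail page:
SELECT d.description
FROM item_description d
WHERE d.item_id = 12345;
```

As for short_descr varchar(125): VARCHAR itself does not block a MEMORY tmp table the way TEXT does, but the MEMORY engine stores it as fixed-length, so every row in the tmp table would carry the full 125 bytes.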
> Another tidbit -- don't use BIGINT unless your
> really need to. It takes 8 bytes; INT takes only
> 4. Smaller -> more cacheable -> fewer disk hits
> -> faster.
Changed. Now using only int(11) for the ids =)
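For reference, the change is a one-line ALTER per column — a sketch with assumed column attributes (NOT NULL etc.; adjust to the real definitions):

```sql
-- Shrink 8-byte BIGINT ids to 4-byte INT:
-- smaller rows -> more cacheable -> fewer disk hits.
ALTER TABLE item          MODIFY id          INT NOT NULL;
ALTER TABLE category_item MODIFY item_id     INT NOT NULL,
                          MODIFY category_id INT NOT NULL;
```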
>
> If my suggested rewrite of the query provides the
> desired rows, I would be quite interested to see
> what the Profile looks like.
So I changed all BIGINTs to INT and listed explicit fields in the query,
so it became (I think MySQL can now create a MEMORY tmp table):
SELECT i2.`id` , i2.`vendor_id` , i2.`name` , i2.`price` , i2.`enabled` , i2.`quantity` , i2.`main_pic` , i2.`only_selfdelivery` , i2.`sale` , 'rub.' currency, i2.enabled & ( i2.quantity >1 ) AS onstock
FROM item i2, (
SELECT DISTINCT i.id
FROM item i, category_item ic, category_childs cc
WHERE i.id = ic.item_id
AND ic.category_id = cc.category_id
AND cc.parent_id =6880
ORDER BY i.sale DESC , i.enabled DESC , cc.sortorder, ic.sortorder
) i1
WHERE i2.id = i1.id
So profiling:
starting 0.000021
checking query cache
for query 0.000169
Opening tables 0.000032
System lock 0.000011
Table lock 0.000097
optimizing 0.000021
statistics 0.000099
preparing 0.000049
Creating tmp table 0.000032
executing 0.000005
Copying to tmp table 0.125292
Sorting result 0.005317
Sending data 0.004170
removing tmp table 0.000052
Sending data 0.000017
init 0.000054
optimizing 0.000017
statistics 0.000182
preparing 0.000022
executing 0.000007
Sending data 0.000758
end 0.000011
query end 0.000006
freeing items 0.000049
removing tmp table 0.000032
closing tables 0.000015
logging slow query 0.000006
cleaning up 0.000008
Explain plan:
mysql> EXPLAIN SELECT i2.`id` , i2.`vendor_id` , i2.`name` , i2.`price` , i2.`enabled` , i2.`quantity` , i2.`main_pic` , i2.`only_selfdelivery` , i2.`sale` , 'rub.'currency, i2.enabled & ( i2.quantity >1 ) AS onstock
-> FROM item i2, (
->
-> SELECT DISTINCT i.id
-> FROM item i, category_item ic, category_childs cc
-> WHERE i.id = ic.item_id
-> AND ic.category_id = cc.category_id
-> AND cc.parent_id =6880
-> ORDER BY i.sale DESC , i.enabled DESC , cc.sortorder, ic.sortorder
-> )i1
-> WHERE i2.id = i1.id;
+----+-------------+------------+--------+-----------------+-----------+---------+--------------------------+------+---------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+------------+--------+-----------------+-----------+---------+--------------------------+------+---------------------------------+
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 8272 | |
| 1 | PRIMARY | i2 | eq_ref | PRIMARY | PRIMARY | 4 | i1.id | 1 | |
| 2 | DERIVED | cc | ref | parent_id | parent_id | 4 | | 247 | Using temporary; Using filesort |
| 2 | DERIVED | ic | ref | PRIMARY,item_id | PRIMARY | 4 | podarkoff.cc.category_id | 33 | |
| 2 | DERIVED | i | eq_ref | PRIMARY | PRIMARY | 4 | podarkoff.ic.item_id | 1 | |
+----+-------------+------------+--------+-----------------+-----------+---------+--------------------------+------+---------------------------------+
Now it is working really fast! Thank you!
If you have any ideas about reducing the result's record count, you're welcome to share them!
Thanks for your help!
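P.S. One more idea along the same lines that I might try (an untested sketch): since only the first 30 items are displayed, push the LIMIT into the derived table, so the outer query joins just 30 ids instead of ~8000 — the total count would then have to come from a separate query:

```sql
SELECT i2.`id` , i2.`vendor_id` , i2.`name` , i2.`price` , i2.`enabled` ,
       i2.`quantity` , i2.`main_pic` , i2.`only_selfdelivery` , i2.`sale` ,
       'rub.' currency,
       i2.enabled & ( i2.quantity > 1 ) AS onstock
FROM item i2, (
    SELECT DISTINCT i.id
    FROM item i, category_item ic, category_childs cc
    WHERE i.id = ic.item_id
      AND ic.category_id = cc.category_id
      AND cc.parent_id = 6880
    ORDER BY i.sale DESC , i.enabled DESC , cc.sortorder, ic.sortorder
    LIMIT 0, 30    -- only the 30 ids the current page needs
) i1
WHERE i2.id = i1.id;
```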