
Re: Monitoring large resultsets
Posted by: Rick James
Date: July 13, 2012 07:50PM

Sure, looking up an entry in the Query cache is efficiently done using a Hash. But the QC is inefficiently scanned whenever a write occurs. This "scan" finds all cached resultsets for that table and purges them from the QC. That is the costly action. A 2GB QC can hold a lot of entries, and those entries live in linked lists -- and a linked list can only be scanned linearly.
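
To watch the purge happen (a rough sketch -- jos_content and its hits column are just stand-ins for any busy table):

SHOW GLOBAL STATUS LIKE 'Qcache_queries_in_cache';    -- note the count
UPDATE jos_content SET hits = hits + 1 WHERE id = 1;  -- any write to the table...
SHOW GLOBAL STATUS LIKE 'Qcache_queries_in_cache';    -- ...and every cached resultset touching jos_content is gone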

> So global eXclusive lock is used while invalidation take place
That means that any other SELECTs -- regardless of what table(s) they are touching -- are stalled waiting for the invalidation to finish.
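
If you catch it live, the stalled threads are easy to spot (state string as in 5.5; this is only a spot check, not a fix):

SHOW FULL PROCESSLIST;   -- look for State = 'Waiting for query cache lock'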

You are decreasing the QC to 1G, but that is still a lot of space being stolen from other caches.
> 9GB available between mysql/php/apache/other
Let's say 7GB for MySQL. Minus the 1GB QC, leaves 6GB for MyISAM (and 0 for InnoDB). That leaves 1200M for key_buffer (caching MyISAM indexes) plus 4800M for the OS for caching data.
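
Roughly, that split would look like this (only a sketch -- the right numbers depend on what else the box is doing; put the same values in my.cnf so they survive a restart):

SET GLOBAL query_cache_size = 1024*1024*1024;   -- 1G for the QC
SET GLOBAL key_buffer_size  = 1200*1024*1024;   -- 1200M for MyISAM indexes
-- the remaining ~4800M is deliberately left unallocated so the OS can cache the MyISAM data files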

Look at the ratio: Qcache_hits / Qcache_inserts
| Qcache_hits | 405394 |
| Qcache_inserts | 174847 |
That's more than 2:1; maybe the QC is worth having.
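
Those counters come straight from SHOW STATUS; 405394 / 174847 works out to about 2.3 hits per insert:

SHOW GLOBAL STATUS WHERE Variable_name IN ('Qcache_hits', 'Qcache_inserts');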

Vertical partitioning is useful IF it cuts down on disk hits. My mantra for big systems is "Count the disk hits".
> | 1 | SIMPLE | jos_content | range | idx_section,idx_state,idx_catid | idx_state | 1 | NULL | 38791 | Using where |
If the average row is 14KB, then 38791 * 14KB is over 500MB -- a lot of stuff to shovel through. (That's even before getting to the other two tables!) That _may_ involve creating a temp table of the same size.
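
A hedged sketch of what the vertical split could look like here (column names are guesses at the Joomla schema; adjust to what you actually have):

CREATE TABLE jos_content_body (
  id INT UNSIGNED NOT NULL PRIMARY KEY,   -- same id as jos_content
  introtext  MEDIUMTEXT,
  `fulltext` MEDIUMTEXT
) ENGINE=MyISAM;
-- jos_content then keeps only the small, frequently-filtered columns,
-- so that range scan over 38791 rows touches far fewer disk blocks.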

> and free memory gets as low as a few megs
If it ever gets so low as to cause swapping, that is _really_ bad on performance.

> Again a lot of pruning so free memory goes up and down a lot.
MySQL is not good at returning space to the OS, so I am worried that you are seeing swapping, not pruning.
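
One way to tell them apart (just a spot check): if this counter keeps climbing while free memory drops, it really is QC pruning; if it stays flat, suspect swapping.

SHOW GLOBAL STATUS LIKE 'Qcache_lowmem_prunes';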

> cache hits is over 99%
Where did you get that number? It is probably the key_buffer; the ratios here are excellent:
| Key_read_requests | 36880649 |
| Key_reads | 19881 |
| Key_write_requests | 1065534 |
| Key_writes | 46201 |
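
For the record, that is about 1 physical read per 1800 read requests and 1 physical write per 23 write requests. They can all be re-pulled with:

SHOW GLOBAL STATUS LIKE 'Key_%';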

This worries me:
> | Max_used_connections | 151 |
151 happens to be the default max_connections, so at some point every available connection was in use -- usually a sign that queries were piling up behind something slow.

> you can split 'created' into two i.e.
> createDate DATE
> createdTime TIME
Almost always it is BAD to do so. It makes other queries much worse, sometimes un-optimizable.
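
A minimal illustration (table and column names taken from this thread, purely for the example): a single DATETIME lets the optimizer treat a time window as one clean index range, while the split columns force the same window into ORs that it handles far less well.

-- single column: one range the optimizer can use an index for
SELECT id FROM jos_content
WHERE created >= '2012-07-01 18:00:00'
  AND created <  '2012-07-02 06:00:00';

-- split columns: the same window becomes an OR across the two days
-- (and grows an extra branch for every extra day in the window)
SELECT id FROM jos_content
WHERE (createDate = '2012-07-01' AND createdTime >= '18:00:00')
   OR (createDate = '2012-07-02' AND createdTime <  '06:00:00');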

> 8 cpus, would handle a few concurrent queries, even with poor coding. So there is something else going on.
True, it should be able to handle 1000 'simple' queries per second. Inefficient queries are another matter.

Umair, find the Fulltext forum.
