Re: Monitoring large resultsets
Sure, looking up an entry in the Query Cache is done efficiently, via a hash. But the QC is scanned inefficiently whenever a write occurs: the scan finds every cached resultset that touches the modified table and purges it from the QC. That purge is the costly action. In 2GB, you can store a lot of "linked lists", and a linked list can only be scanned linearly.
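A toy sketch of the asymmetry (not MySQL's real data structures -- the names and layout here are made up for illustration): a hit is one hash lookup, but one write must walk every cached resultset for that table.

```python
# Toy model of the Query Cache -- NOT MySQL's actual implementation.
cache = {}       # query text -> cached resultset (hash lookup on read)
by_table = {}    # table -> list of query texts (the "linked list" per table)

def qc_store(query, table, result):
    cache[query] = result
    by_table.setdefault(table, []).append(query)

def qc_lookup(query):
    return cache.get(query)          # O(1) hash lookup

def qc_invalidate(table):
    # Linear scan: EVERY cached resultset for this table is purged,
    # no matter how many entries there are.
    for query in by_table.pop(table, []):
        cache.pop(query, None)

qc_store("SELECT * FROM t WHERE id=1", "t", ["row1"])
qc_store("SELECT COUNT(*) FROM t", "t", [42])
qc_invalidate("t")   # one write discards both cached resultsets
```

With 2GB cached, that per-table list can be very long, and the scan happens on every write.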
> So global eXclusive lock is used while invalidation take place
That means that any other SELECTs -- regardless of what table(s) they are touching -- are stalled waiting for the invalidation to finish.
You are decreasing the QC to 1GB, but that is still a lot of space stolen from other caches.
> 9GB available between mysql/php/apache/other
Let's say 7GB for MySQL. Minus the 1GB QC, leaves 6GB for MyISAM (and 0 for InnoDB). That leaves 1200M for key_buffer (caching MyISAM indexes) plus 4800M for the OS for caching data.
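The budget above, in the post's round numbers (the 7GB-for-MySQL split is an assumption, not a measurement):

```python
# Back-of-envelope RAM budget, in MB, using the round figures from the post.
total_mysql = 7000                   # ~7GB of the 9GB earmarked for MySQL
qc          = 1000                   # the proposed 1GB Query Cache
myisam      = total_mysql - qc       # ~6GB left for MyISAM
key_buffer  = 1200                   # key_buffer_size: caches MyISAM indexes
os_cache    = myisam - key_buffer    # the OS file cache holds the data blocks
```

Note that MyISAM relies on the OS to cache data; only indexes go through the key_buffer, which is why most of the 6GB is left to the OS.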
Look at the ratio: Qcache_hits / Qcache_inserts
| Qcache_hits | 405394 |
| Qcache_inserts | 174847 |
That's more than 2:1; maybe the QC is worth having.
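Checking that ratio from the two counters quoted above:

```python
# Qcache hit-to-insert ratio from the SHOW STATUS values in the post.
qcache_hits    = 405394
qcache_inserts = 174847
ratio = qcache_hits / qcache_inserts   # ~2.3 hits per insert
```

A ratio well above 1 means cached resultsets are being reused before they get invalidated, so the QC is earning its keep.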
Vertical partitioning is useful IF it cuts down on disk hits. My mantra for big systems: "Count the disk hits".
> | 1 | SIMPLE | jos_content | range | idx_section,idx_state,idx_catid | idx_state | 1 | NULL | 38791 | Using where |
If the average row is 14K, then 38791 * 14K = 500MB is a lot of stuff to shovel through. (That's even before getting to the other two tables!) That _may_ involve creating a temp table of 500MB.
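The arithmetic behind that 500MB figure (the 14K average row size is the post's estimate, not a measured value):

```python
# Rows examined (from the EXPLAIN) times estimated average row size.
rows_examined = 38791          # "rows" column of the EXPLAIN output
avg_row_bytes = 14 * 1024      # ~14K per row, per the post's estimate
total_bytes = rows_examined * avg_row_bytes
total_mb = total_bytes / 2**20   # roughly 500+ MB to shovel through
```

If the query needs a temp table of that size, it will spill to disk, which is exactly the kind of disk hit worth counting.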
> and free memory gets as low as a few megs
If it ever gets so low as to cause swapping, that is _really_ bad on performance.
> Again a lot of pruning so free memory goes up and down a lot.
MySQL is not good at returning space to the OS, so I am worried that you are seeing swapping, not pruning.
> cache hits is over 99%
Where did you get that number? It probably refers to the key_buffer; the ratios here are excellent:
| Key_read_requests | 36880649 |
| Key_reads | 19881 |
| Key_write_requests | 1065534 |
| Key_writes | 46201 |
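The miss ratios implied by those four counters -- this is where the "over 99%" hit rate comes from:

```python
# key_buffer miss ratios from the SHOW STATUS counters quoted above.
key_read_requests  = 36880649
key_reads          = 19881      # index reads that had to go to disk
key_write_requests = 1065534
key_writes         = 46201      # index writes that had to go to disk

read_miss  = key_reads / key_read_requests     # ~0.05% -- excellent
write_miss = key_writes / key_write_requests   # ~4.3% -- still fine
```

A read miss rate around 0.05% means the 1200M key_buffer is comfortably holding the hot part of the MyISAM indexes.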
This worries me:
> | Max_used_connections | 151 |
> you can split 'created' into two i.e.
> createDate DATE
> createdTime TIME
Almost always it is BAD to do so: it makes other queries much worse, sometimes un-optimizable.
> 8 cpus, would handle a few concurrent queries, even with poor coding. So there is something else going on.
True, it should be able to handle 1000 'simple' queries per second. Inefficient queries are another matter.
Umair, find the Fulltext forum.