Re: How to get acceptable/repeatable performance on huge table
Thank you for the reply!
SHOW GLOBAL STATUS LIKE 'innodb_%';
+---------------------------------------+----------------+
| Variable_name | Value |
+---------------------------------------+----------------+
| Innodb_buffer_pool_pages_data | 2907628 |
| Innodb_buffer_pool_pages_dirty | 84275 |
| Innodb_buffer_pool_pages_flushed | 509726094 |
| Innodb_buffer_pool_pages_free | 0 |
| Innodb_buffer_pool_pages_misc | 28372 |
| Innodb_buffer_pool_pages_total | 2936000 |
| Innodb_buffer_pool_read_ahead_rnd | 0 |
| Innodb_buffer_pool_read_ahead | 108473641 |
| Innodb_buffer_pool_read_ahead_evicted | 28151162 |
| Innodb_buffer_pool_read_requests | 109076746683 |
| Innodb_buffer_pool_reads | 354376589 |
| Innodb_buffer_pool_wait_free | 0 |
| Innodb_buffer_pool_write_requests | 14449590558 |
| Innodb_data_fsyncs | 42471054 |
| Innodb_data_pending_fsyncs | 0 |
| Innodb_data_pending_reads | 1 |
| Innodb_data_pending_writes | 0 |
| Innodb_data_read | 7583311269888 |
| Innodb_data_reads | 462851509 |
| Innodb_data_writes | 1160252345 |
| Innodb_data_written | 10329908941312 |
| Innodb_dblwr_pages_written | 254863303 |
| Innodb_dblwr_writes | 16872647 |
| Innodb_have_atomic_builtins | ON |
| Innodb_log_waits | 0 |
| Innodb_log_write_requests | 3847430165 |
| Innodb_log_writes | 887538768 |
| Innodb_os_log_fsyncs | 4423146 |
| Innodb_os_log_pending_fsyncs | 0 |
| Innodb_os_log_pending_writes | 0 |
| Innodb_os_log_written | 1978353846784 |
| Innodb_page_size | 16384 |
| Innodb_pages_created | 22289742 |
| Innodb_pages_read | 462851411 |
| Innodb_pages_written | 254863303 |
| Innodb_row_lock_current_waits | 0 |
| Innodb_row_lock_time | 186533 |
| Innodb_row_lock_time_avg | 46633 |
| Innodb_row_lock_time_max | 51502 |
| Innodb_row_lock_waits | 4 |
| Innodb_rows_deleted | 843043181 |
| Innodb_rows_inserted | 972839870 |
| Innodb_rows_read | 91448487352 |
| Innodb_rows_updated | 1283492047 |
| Innodb_truncated_status_writes | 0 |
+---------------------------------------+----------------+
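From these counters you can estimate the buffer pool miss rate directly: `Innodb_buffer_pool_reads` (reads that had to go to disk) divided by `Innodb_buffer_pool_read_requests` (all logical reads). A hedged sketch using the numbers pasted above; the second query pulls the counters live from `information_schema.GLOBAL_STATUS` (available in MySQL 5.1/5.5, the era of this thread):

```sql
-- Miss rate from the pasted counters:
-- 354376589 disk reads out of 109076746683 logical read requests
SELECT 354376589 / 109076746683 AS miss_rate;  -- roughly 0.003, i.e. ~99.7% hit rate

-- Same calculation against the running server (MySQL 5.x information_schema):
SELECT (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
        WHERE VARIABLE_NAME = 'Innodb_buffer_pool_reads')
     / (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
        WHERE VARIABLE_NAME = 'Innodb_buffer_pool_read_requests') AS miss_rate;
```

Note also that `Innodb_buffer_pool_pages_free` is 0 and `Innodb_buffer_pool_read_ahead_evicted` is large, which suggests the working set is bigger than the ~45 GB buffer pool (2,936,000 pages × 16 KB), so some disk reads are unavoidable for full scans.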
MySQL uses only the index anyway, so the other column sizes should be irrelevant(?):
mysql> explain select adId, count(userUId) from UserVisits where adId < 25000 group by adId;
+----+-------------+------------+-------+---------------+---------+---------+------+-----------+-----------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+------------+-------+---------------+---------+---------+------+-----------+-----------------------------------------------------------+
| 1 | SIMPLE | UserVisits | index | NULL | PRIMARY | 12 | NULL | 248411698 | Using where; Using index; Using temporary; Using filesort |
+----+-------------+------------+-------+---------------+---------+---------+------+-----------+-----------------------------------------------------------+
We need (and have) very fast access on Uid; that is the main SELECT on the table.
We could switch adId to MEDIUMINT, true, but the gain would likely be small.
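For what it's worth, the EXPLAIN above shows `type: index`, i.e. a full scan of all ~248M PRIMARY key entries plus a temporary table and filesort for the GROUP BY. A secondary index leading with adId would turn this into a range scan in grouping order. A sketch under that assumption; the index name `idx_adid_useruid` is made up for illustration:

```sql
-- Hypothetical composite index covering the reporting query.
-- Leading column adId satisfies both the range predicate and the GROUP BY;
-- including userUId keeps the query index-only ("Using index").
ALTER TABLE UserVisits ADD INDEX idx_adid_useruid (adId, userUId);

EXPLAIN SELECT adId, COUNT(userUId)
FROM UserVisits
WHERE adId < 25000
GROUP BY adId;
```

If the optimizer picks it up, Extra should drop to "Using where; Using index" with `type: range`, avoiding the temporary table and filesort. The trade-off is extra write cost on a table this hot, so it may only pay off if the report runs often.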