
Re: Need help to improve innodb performance
Posted by: Rick James
Date: March 31, 2009 12:20AM

InnoDB PRIMARY KEYs --
* InnoDB must have a PRIMARY KEY.
* If you provide one, it uses that.
* Otherwise, if you have a UNIQUE key whose columns are all NOT NULL, it uses the first such key.
* Otherwise one will be provided for you. It is usually sub-optimal. (It will be a hidden 6-byte row ID that you cannot reference in queries.)
* The PRIMARY KEY is "clustered" with the data. The data and the PK are in the same BTree. There is virtually no extra space for the PK, unlike all secondary keys.

InnoDB Secondary keys (all other indexes) --
* They live in their own BTree.
* The leaf nodes include the PRIMARY KEY value (if not already included in the field(s) of the secondary key) -- see the sketch below.
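For example, here is a minimal sketch (a made-up table) of that last point: the secondary index KEY(b) implicitly carries the PRIMARY KEY column a, so a query that needs only b and a can be answered from the secondary index alone:

CREATE TABLE t (
  a INT UNSIGNED NOT NULL,
  b INT UNSIGNED NOT NULL,
  c VARCHAR(100),
  PRIMARY KEY (a),
  KEY (b)
) ENGINE=InnoDB;

-- The leaf nodes of KEY(b) contain `a`, so EXPLAIN reports "Using index"
-- (the clustered data is not touched) for this query:
EXPLAIN SELECT a FROM t WHERE b = 42;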

I bring up this lecture because your table has no PK. The table `keys` has
KEY `IN_LOG_ID` (`log_root_id`,`log_sub_id`)
If that compound index is really UNIQUE, then I strongly recommend you make it PRIMARY KEY. This will save space, and speed things up. Ditto for the other table.
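A sketch of that change, assuming the pair really is unique and both columns are NOT NULL (the table and index names are taken from your schema):

ALTER TABLE `keys`
  DROP INDEX `IN_LOG_ID`,
  ADD PRIMARY KEY (`log_root_id`, `log_sub_id`);

On a big table this ALTER rebuilds the table, so run it during a quiet period.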

If it is not unique, then I recommend (barring additional info) you create a table to map id <-> (`log_root_id`,`log_sub_id`):
CREATE TABLE log_map (
  log_id INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- surrogate key
  log_root_id ...,                              -- same types as in the existing tables
  log_sub_id ...,
  PRIMARY KEY (log_id),
  UNIQUE (log_root_id, log_sub_id)              -- keeps the pair unique; also used to look up log_id
) ENGINE=InnoDB;
and then replace the pair of fields in the other tables with log_id. Note that this will save space overall. It will lead to some new JOINs in your SELECTs, but (hopefully) they will perform at least as well as before.
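A rough sketch of that second half, using `keys` as the example dependent table (the migration steps are simplified, and the literal values at the end are placeholders):

-- Populate log_map from the existing data first:
INSERT IGNORE INTO log_map (log_root_id, log_sub_id)
  SELECT DISTINCT log_root_id, log_sub_id FROM `keys`;

-- Add the surrogate column and backfill it from log_map:
ALTER TABLE `keys` ADD COLUMN log_id INT UNSIGNED NOT NULL, ADD KEY (log_id);

UPDATE `keys` k
JOIN log_map m ON m.log_root_id = k.log_root_id AND m.log_sub_id = k.log_sub_id
SET k.log_id = m.log_id;

-- Then drop the old pair (and its index):
ALTER TABLE `keys` DROP INDEX `IN_LOG_ID`, DROP COLUMN log_root_id, DROP COLUMN log_sub_id;

-- Typical lookup afterwards:
SELECT k.*
FROM log_map m
JOIN `keys` k ON k.log_id = m.log_id
WHERE m.log_root_id = 123 AND m.log_sub_id = 45;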

Saving space -> more stuff in cache -> faster.

Consider similar things for hostname, username, etc.

| innodb_buffer_pool_size | 8589934592 | -- You have 16GB of RAM; can you spare more for the buffer pool without swapping?

| innodb_doublewrite | ON | -- This is good protection, but it costs disk I/O.

| innodb_file_io_threads | 32 | -- Not sure if this might be too high.

| innodb_flush_log_at_trx_commit | 2 | -- This one trades durability for performance (with 2, an OS crash or power loss can lose up to about a second of transactions); read the documentation carefully.

| innodb_flush_method | O_DIRECT | -- Good, especially if you have good buffering in the Disk controller.
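For reference, a my.cnf sketch of the variables discussed above (the values are placeholders; what you can afford depends on what else runs on the machine):

[mysqld]
# Most of the RAM not needed by the OS and other processes; avoid swapping.
innodb_buffer_pool_size = 10G
# Bypass the filesystem cache; works well with a battery-backed controller cache.
innodb_flush_method = O_DIRECT
# 1 = flush the log at every COMMIT (safest);
# 2 = write at COMMIT, flush about once per second (faster, can lose ~1s on OS crash / power loss).
innodb_flush_log_at_trx_commit = 2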

You did not mention the disk subsystem. I would hope you have RAID with at least 'striping'. The more drives, the more disk I/O per second you can do. And the transaction log, etc., need disk I/O too.

The table `keys` smells like key-value pairs; is it? If so, and if you have no index on `key`, then I see no reason for storing them separately -- instead collect all the key-value pairs, marshal them (I like JSON), compress the result into a blob, and store one row, not many. This will increase the _effective_ qps of your system.
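A sketch of what the consolidated table could look like (table and column names are made up; the JSON is built and compressed in the application before the INSERT):

CREATE TABLE log_kv (
  log_id INT UNSIGNED NOT NULL,      -- or (log_root_id, log_sub_id)
  kv_blob MEDIUMBLOB NOT NULL,       -- application-compressed JSON of all the key-value pairs
  PRIMARY KEY (log_id)
) ENGINE=InnoDB;

-- One INSERT per log entry instead of one per key-value pair:
-- INSERT INTO log_kv (log_id, kv_blob) VALUES (123, <compressed bytes>);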

'outbound' averages 1.7KB? And it has several medium-to-large VARCHARs? Consider compressing any varchars that tend to be large. Do the compression in the application, not with the COMPRESS() function in MySQL. This will cut down the network traffic, and will shrink the disk footprint, etc.
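One schema consequence to keep in mind: application-compressed data is binary, so those columns would become VARBINARY/BLOB rather than VARCHAR. A hypothetical example (the column name is made up):

ALTER TABLE outbound
  MODIFY `body` MEDIUMBLOB;   -- was a large VARCHAR; now holds gzip/deflate output from the app

The application then compresses on the way in and decompresses on the way out; existing rows would need a one-time conversion.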

Keep an eye on CPU utilization and I/O usage -- we need to concentrate on whichever you are running out of.

If you implement all of these, you might double your effective throughput. Have you already batched the INSERTs to get 1650 & 1350?
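By batched I mean multi-row INSERTs; roughly 100-1000 rows per statement is usually a good range. A sketch against `keys` (assuming `key`/`value` columns and made-up values; adjust to your real schema):

INSERT INTO `keys` (log_root_id, log_sub_id, `key`, `value`)
VALUES
  (1, 1, 'status', 'ok'),
  (1, 2, 'status', 'ok'),
  (2, 1, 'status', 'retry');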
