
Re: INSERT/UPDATE within transactions (autocommit=0)
Posted by: Alexander Golubowitsch
Date: July 28, 2010 01:03PM

Hi Rick,

thank you for your reply!

Quote

A bad combination:

`id` bigint(20) unsigned NOT NULL auto_increment,
PRIMARY KEY (`id`,`created`),

I worry that it will cause trouble with id not being unique.

Hope I'm interpreting you correctly:
On the _prop_ tables it would be a problem in theory, but we only consider rows whose `created` timestamp is not in the future and that are not marked as `deleted` - unless the property is defined as multiple.
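
To make that concrete, the visibility filter on a _prop_ table looks roughly like this (a sketch, not our literal query; the column list is simplified):

SELECT *
FROM product_prop_name
WHERE id = 42            -- several versions may share one id, hence the composite PK
  AND created <= NOW()   -- ignore versions "created" in the future
  AND deleted = 0        -- ignore versions marked as deleted
ORDER BY created DESC
LIMIT 1;                 -- newest visible version wins (no LIMIT 1 if the property is "multiple")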

The versioning causes a lot of overhead, and we are going to remove it step by step.

Quote

How much RAM do you have? 64-bit MySQL?

8GB RAM, 64-bit.

Quote

A little speed could be had by turning this off:
innodb doublewrite

We have discussed that but are afraid it might be a bit risky. Are we being overly careful here?
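
For reference, what we would be turning off is just this startup option in my.cnf (needs a restart):

[mysqld]
skip-innodb_doublewrite   # equivalent to innodb_doublewrite = 0

The part that scares us: without the doublewrite buffer, a crash in the middle of a page write can leave a torn page that InnoDB's recovery cannot repair from the log alone.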

Quote

FOREIGN KEY has overhead -- a check into the other table.

That hits on INSERTs and UPDATEs, of course. We still consider it valuable enough right now, but I'll keep it in mind and see whether the application can implement those checks in the future.
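
If we ever drop the constraints, the application-side replacement would be roughly this pattern inside the transaction (a sketch; the `product_id` column is illustrative, and it needs the lock to be as safe as a real FOREIGN KEY):

-- hypothetical software-side integrity check, run in the same transaction as the INSERT
SELECT id FROM product WHERE id = 42 FOR UPDATE;  -- lock the parent row so it can't vanish mid-transaction
-- proceed only if that returned a row:
INSERT INTO product_prop_name (product_id, created) VALUES (42, NOW());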

Quote

Anything in the slowlog?

We'll re-activate it in an hour or so, when MySQL is restarted for a few config changes. I had to set the threshold to 10 seconds, though - see below.
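
The relevant my.cnf bits for that restart (5.1-style option names; the log file path is just an example):

[mysqld]
slow_query_log      = 1                        # 5.0 and earlier use log-slow-queries instead
slow_query_log_file = /var/log/mysql/slow.log  # example path
long_query_time     = 10                       # seconds; can't go lower for now, see below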

Quote

How many rows in the tables?

`product` has 340k rows and `product_prop_name` 275k, which is also the actual number of objects of that type.
But the problem occurs even on tables holding e.g. just 5k objects.


What we have done so far / will do within an hour or so:

- Set innodb_flush_log_at_trx_commit from 2 to 0:
Appears to gain some performance at a cost that _seems_ bearable.
- Increased innodb_log_file_size from 250M to 512M:
A blind shot; the current value should be OK. Any thoughts? (A my.cnf sketch of both changes follows after this list.)

- Reduce filesystem load wherever possible

- Reduce INSERT load on other tables:
We have a `useragents` (INSERT only) and a `session` (SELECT / INSERT IGNORE) table, both MyISAM and hit on every page request.
I am considering replacing the `useragents` logic with a solution that stores rows to be INSERTed in a MEMORY table and "flushes" them to disk in one go as soon as a certain number of rows has accumulated (see the sketch after this list).
Also considering making both of them InnoDB - their MyISAM state is a bit of a relic anyway.

Any thoughts in general on running both storage engines in parallel?

- We have a `product_quicksearch` table that "caches" (de-normalizes, N rows) the relevant data from the ~70 `product` property/options tables. It is also MyISAM because we need two FULLTEXT indexes on it, so converting it to InnoDB is impossible. Queries against that table are also the reason we cannot set the slow query threshold any lower - the log would get flooded. The table is under some load from the frontend because it is the primary source of results for the product catalogue. Some queries are served from the query cache, but under certain circumstances queries over large result sets, including sorting, may run longer than the usual slow query limit.
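
For completeness, here are the two InnoDB changes from the top of the list as they go into my.cnf. Note that resizing the log files requires a clean shutdown and moving the old ib_logfile0/ib_logfile1 out of the way before restarting, so InnoDB recreates them at the new size:

[mysqld]
innodb_flush_log_at_trx_commit = 0     # was 2; 0 writes and flushes the log only ~once per second,
                                       # so up to ~1s of committed transactions can be lost in a crash
innodb_log_file_size           = 512M  # was 250M; clean shutdown + move old ib_logfile* away first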
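
And the `useragents` buffering idea, sketched (table and column names here are hypothetical, not our real schema):

-- in-memory staging table, hypothetical layout
CREATE TABLE useragents_buffer (
  agent      VARCHAR(255) NOT NULL,
  first_seen DATETIME     NOT NULL
) ENGINE=MEMORY;

-- every page request only touches the buffer
INSERT INTO useragents_buffer (agent, first_seen) VALUES ('Mozilla/4.0 ...', NOW());

-- once the buffer reaches N rows, flush it to disk in one statement;
-- the LOCK keeps new rows from slipping in between the copy and the delete
LOCK TABLES useragents_buffer WRITE, useragents WRITE;
INSERT INTO useragents (agent, first_seen)
  SELECT agent, first_seen FROM useragents_buffer;
DELETE FROM useragents_buffer;
UNLOCK TABLES;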


Thanks again for your help so far!
