MySQL Forums

Re: Random data loss, less than 0.1% of the time - how to prevent it
Posted by: Rick James
Date: March 25, 2014 07:19PM

Let me see if I can summarize...
You have a single-row INSERT, wrapped in a try-catch mechanism, something like

autocommit = 1; -- Is this the case??

try eval {
    $sql = "INSERT ...";   # only one row
    $dbh->do($sql);
};
catch my $err;             # ...but this never fires? (see the sketch below)

1. The INSERT hits a deadlock, and the row is not inserted;
2. But the "catch" is not executed.
3. There are 3 UNIQUE indexes, including the AUTO_INCREMENT PRIMARY KEY, which is not provided in the INSERT (so that a new value will be generated automatically).
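
If it helps, here is a minimal, self-contained sketch of the pattern I'm picturing, using DBI with Exception::Class::TryCatch. The DSN, credentials, table, and column names are placeholders, not your actual code:

use DBI;
use Exception::Class::TryCatch;

my $dbh = DBI->connect( 'DBI:mysql:test', 'user', 'pass',
                        { RaiseError => 1, AutoCommit => 1 } );

my ( $v1, $v2 ) = ( 1, 'x' );   # placeholder values

try eval {
    # Only one row; the AUTO_INCREMENT PK is omitted so a value is generated.
    $dbh->do( 'INSERT INTO t (col1, col2) VALUES (?, ?)', undef, $v1, $v2 );
};
if ( catch my $err ) {
    # A deadlock should surface as MySQL error 1213 (SQLSTATE 40001).
    warn 'INSERT failed: ', $err->error;
}

If the deadlock error is actually reaching DBI, you should see 1213 in $dbh->err inside that handler.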

----

This might be the simplest deadlock to generate:
CREATE TABLE foo ( x INT PRIMARY KEY, y CHAR(1) ) ENGINE=InnoDB;
INSERT INTO foo VALUES (1, 'a');
INSERT INTO foo VALUES (2, 'b');
Process 1:
BEGIN;
UPDATE foo SET y='aa' WHERE x=1;
SELECT SLEEP(10); -- to give you time to start process 2 before this one COMMITs.
UPDATE foo SET y='bb' WHERE x=2;
COMMIT;
Process 2:
BEGIN;
UPDATE foo SET y='bbb' WHERE x=2; -- Note that the order is different
SELECT SLEEP(10);
UPDATE foo SET y='aaa' WHERE x=1;
COMMIT;

There may be a way to simulate the exact deadlock you are getting, but I don't know what it is. I'm suggesting the above deadlock just to see whether Exception::Class::TryCatch and DBI will actually catch the deadlock error.
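
To exercise it, you could play "Process 2" from Perl while Process 1 runs in the mysql client. A rough sketch (DSN and credentials are placeholders; note that InnoDB picks one of the two connections as the deadlock victim, so it may take a run or two before the 1213 lands on the Perl side):

use DBI;
use Exception::Class::TryCatch;

my $dbh = DBI->connect( 'DBI:mysql:test', 'user', 'pass',
                        { RaiseError => 1, AutoCommit => 0 } );

try eval {
    $dbh->do("UPDATE foo SET y='bbb' WHERE x=2");
    sleep 10;   # give Process 1 time to lock x=1 and block on x=2
    $dbh->do("UPDATE foo SET y='aaa' WHERE x=1");   # the lock cycle completes here
    $dbh->commit;
};
if ( catch my $err ) {
    # Expect error 1213: "Deadlock found when trying to get lock"
    print 'Caught: ', $err->error, "\n";
    $dbh->rollback;
}

If the catch block prints, the error is being delivered; if the row change silently vanishes instead, the problem is in the error-handling path, not in MySQL.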
