MySQL Forums

Re: Random data loss, less than 0.1% of the time - how to prevent it
Posted by: Ted Byers
Date: March 22, 2014 04:06PM

Rick James Wrote:
-------------------------------------------------------
> > There is a unique index defined, but that, too,
> gets checked before any attempt is made to store
> data.
>
> (With SHOW CREATE TABLE, I would not have ask...)
> Which Engine are you using? If you are using
> InnoDB, is the 'check' and the INSERT in the same
> transaction (BEGIN...COMMIT)?
>
I use InnoDB. For the unique index, the values are provided by a service supplier and are guaranteed to be unique (they actually look like they were generated by code designed to produce GUIDs). And there is no possibility of submitting the same record twice: all of the sanity checking of the data is handled by my Perl code before any attempt is made to store it.

I don't suppose there is any possibility that the auto-incremented primary key could produce two records with the same value if two attempts to store data happen too close together? (This, actually, is why I thought some kind of buffering might be appropriate, so data isn't stored faster than MySQL can handle it. Perhaps I was worried about something that isn't really a problem?)

> That is, in between "checking" and INSERTing, can
> another thread sneak in and INSERT the value you
> just checked?
>
No.
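For anyone following the thread: the race Rick describes can be closed by letting the unique index do the checking atomically, instead of a separate SELECT in application code followed by an INSERT. A minimal sketch, with table and column names invented for illustration (the supplier's GUID-like value is assumed to live in a column called txn_id):

```sql
-- Hypothetical table: txn_id holds the supplier's guaranteed-unique value
CREATE TABLE transactions (
    id      INT UNSIGNED  NOT NULL AUTO_INCREMENT,
    txn_id  CHAR(36)      NOT NULL,
    amount  DECIMAL(10,2) NOT NULL,
    PRIMARY KEY (id),
    UNIQUE KEY uk_txn_id (txn_id)
) ENGINE=InnoDB;

-- The INSERT itself enforces uniqueness, so there is no window
-- between a "check" and the INSERT for another connection to use.
INSERT INTO transactions (txn_id, amount)
VALUES ('supplier-guid-goes-here', 19.95)
ON DUPLICATE KEY UPDATE id = id;  -- no-op if the row already exists
```

The same effect can be had with INSERT IGNORE, or by wrapping a SELECT ... FOR UPDATE and the INSERT in one BEGIN...COMMIT; the common point is that the uniqueness decision is made by InnoDB under its own locking, not by the client.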

> If you are doing this for "normalization", provide
> some more details; I may have some suggested code
> that is cleaner and more efficient, and definitely
> avoids deadlocks.

No, this is not for normalization. Rather, it is used to store transactions involving two parties.

> * Do you have batches of values to INSERT?
> * Are you getting back AUTO_INCREMENT ids? (A
> common thing to do in normalization.)

No. I don't need to, as this is not for that kind of normalization. I use the unique index (whose value I have already obtained from my service provider, and which is guaranteed to be unique) to relate the records in the two tables (two tables because the process involves two steps).
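Since the two tables are related by the supplier's unique value rather than by an AUTO_INCREMENT id, pulling both halves of a transaction back together is a plain join on that column. A sketch, again with hypothetical table names (step_one and step_two standing in for the two steps of the process):

```sql
-- Both tables carry the supplier's unique txn_id, so no
-- auto-increment id needs to be passed between the two steps.
SELECT s1.*, s2.*
FROM step_one AS s1
JOIN step_two AS s2 ON s2.txn_id = s1.txn_id
WHERE s1.txn_id = 'supplier-guid-goes-here';
```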

Thanks

Ted
