MySQL Forums

Re: DBD::mysql::st execute failed: Lost connection to MySQL server during query
Posted by: Philip Johnson
Date: September 29, 2011 08:09AM


Name : Inbound
Engine : InnoDB
Version : 10
Row_format : Compact
Rows : 17372984
Avg_row_length : 1127
Data_length : 19583205376
Max_data_length : 0
Index_length : 1856684032
Data_free : 120649154560
Auto_increment : 261465951
Create_time : 2011-06-07 13:53:23
Update_time :
Check_time :
Collation : latin1_swedish_ci
Checksum :
Create_options :
Comment :
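For reference, the field/value listing above is the vertical-format output of MySQL's SHOW TABLE STATUS; assuming the table name from the listing, it can be reproduced with:

```sql
-- Vertical (\G) output of SHOW TABLE STATUS for the table in question;
-- the name 'Inbound' is taken from the listing above.
SHOW TABLE STATUS LIKE 'Inbound'\G
```

Note that for InnoDB tables, Rows and Avg_row_length are estimates, and Data_free reports free space in the tablespace rather than reclaimable space in this table alone.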


What percent of the rows have (status = 'n')? Does this percentage fluctuate?

Less than 0.001 percent. Out of 17 million rows, the number of 'waiting to process' (status = 'n') rows in this table at any given time is typically fewer than 5, and usually 0. During the times that MySQL has 'locked up' as described before (note that those locks affected only writes; reads could still be performed fine, so it looked like a deadlock issue even though none was reported), the count only climbs to a few hundred 'n' rows before the lockup resolves itself and we process the backlog within a few seconds.
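A query along these lines would show that distribution directly (the table and column names are assumed from the thread, not quoted from the actual schema):

```sql
-- Hypothetical check of how many rows are in each state;
-- 'Inbound' and 'status' are assumed names from the discussion.
SELECT status, COUNT(*) AS cnt
FROM Inbound
GROUP BY status;
```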


How many rows are returned by "SELECT FLOOR..."

Always 1, since buried five functions down is MIN(Time), which makes MySQL aggregate all results into a single row.
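That behavior is standard SQL: an aggregate function such as MIN() with no GROUP BY collapses the entire matching result set into one row. A minimal sketch (the actual nested expression was not quoted in the thread, so this inner query is illustrative only):

```sql
-- With no GROUP BY, MIN() aggregates every matching row into a single
-- result row, regardless of how many rows satisfy the WHERE clause.
SELECT FLOOR(MIN(Time))
FROM Inbound
WHERE status = 'n';
```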


3 seconds for
> UPDATE InterfaceStatus SET pid = '24221', Status = 'Got ACK', Debug = '1', LastTransaction = NOW(), Transaction = '250418408' WHERE Process = 'CERME_ADT';
This seems unlikely, since Process is the PRIMARY KEY. However, it can happen if InnoDB is too busy with other stuff. Or is this part of a big transaction?

That statement was not part of a larger transaction (with autocommit=1, it is effectively wrapped in its own single-statement transaction). I agree that the delay seems odd, and I can say that when these delays happen, they tend to happen in groups (in the example I provided, 9 queries, each of which should have had sub-second times, all locked for multiple seconds until being executed all at once). It does seem as if InnoDB gets busy with some other work and essentially stops processing queries for several seconds, then goes back to normal processing again.
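In autocommit mode each statement commits on its own, so the quoted UPDATE is equivalent to this explicit form:

```sql
-- With autocommit=1 (the MySQL default), the single UPDATE above is
-- implicitly wrapped in its own transaction, equivalent to:
START TRANSACTION;
UPDATE InterfaceStatus
   SET pid = '24221', Status = 'Got ACK', Debug = '1',
       LastTransaction = NOW(), Transaction = '250418408'
 WHERE Process = 'CERME_ADT';
COMMIT;
```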

Is there anything InnoDB does that could cause it to pause like that for a few seconds? Flushing change logs to disk, automatic consistency checks, optimizing the query log, anything like that? The only thing I can think of is that the entirety of all my InnoDB tables would not fit into memory at once, but a small enough subset of them is actually being queried and/or updated that the active pages should all fit, with gigabytes of cache space to spare. Maybe indexes being loaded? There is more than 9 GB of index data for the live interface schema alone...
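Stalls like that are commonly investigated by looking at InnoDB's internal state. A few standard diagnostics that can help narrow down whether flushing or checkpointing is the culprit:

```sql
-- Snapshot of InnoDB internals: pending reads/writes and fsyncs,
-- log sequence numbers, and buffer pool activity -- useful for
-- spotting flush- or checkpoint-related stalls.
SHOW ENGINE INNODB STATUS\G

-- Key settings that influence flush behavior and dirty-page buildup.
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'innodb_log_file_size';
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
```

An undersized redo log in particular can force aggressive checkpoint flushing that blocks writes for seconds at a time while reads continue normally, which matches the symptoms described.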

