The engine expects to examine 22.5 million rows, expects that the treated=0 condition will remove only about 10% of them, and sees no available index that would speed things up.
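For reference, here's roughly where those numbers show up in the Explain output. Your actual query isn't quoted in this reply, so the statement below is only a sketch that assumes a filter on treated = 0 against the record table and a placeholder column list:
[code]
explain
select *                -- placeholder; substitute your actual select list
from record
where treated = 0;
-- key:      NULL means the engine sees no usable index
-- rows:     the ~22.5 million rows it expects to examine
-- filtered: the percentage of examined rows it expects to survive the WHERE
[/code]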
This takes 10 minutes on machine 1 and 2 minutes on machine 2? What besides the OS differs between the two machines?
If the average row is 250 bytes, that's 5GB of indexes & data. You can check that number with ...
[code]
select
  concat(round(sum(table_rows)/1000000,2),'m') rows,
  concat(round(sum(data_length)/(1024*1024*1024),2),'g') data,
  concat(round(sum(index_length)/(1024*1024*1024),2),'g') idx,
  concat(round(sum(data_length+index_length)/(1024*1024*1024),2),'g') total_size,
  round(sum(index_length)/sum(data_length+index_length),2) idxfrac
from information_schema.tables
where table_schema = '...'   -- fill in your DB name
  and table_name = 'record';
[/code]
It does take a long time to read and return that much data. But it's hard to imagine the MySQL engine taking 10 minutes just to read 22 million rows unless there's a lot of swapping going on. It's also hard to imagine a reason to need all 22 million rows returned in one go. Can you clarify?
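One thing worth comparing on the two machines, assuming the table is InnoDB, is how the buffer pool size stacks up against that ~5GB total. If the data and indexes don't fit in the pool, the engine keeps going back to disk, which would show up as exactly that kind of slowdown:
[code]
show variables like 'innodb_buffer_pool_size';
-- value is in bytes; divide by 1024*1024*1024 for GB and compare with total_size above
[/code]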
If you disagree with the engine's estimates as given by Explain, you could add an index on (id, treated), tell the engine to use it, and see if it's quicker.
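A sketch of that experiment; it assumes the table is the record table from above with columns id and treated, and the select list is a placeholder since your original query isn't quoted here:
[code]
alter table record add index idx_id_treated (id, treated);

explain
select id, treated      -- placeholder; substitute your actual select list
from record force index (idx_id_treated)
where treated = 0;
[/code]
Then compare the Explain estimates and the actual timings with and without the force index hint.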