MySQL Forums
Forum List  »  Performance

Re: Utilize maximum system resources for better performance
Posted by: Ronald Brown
Date: September 19, 2015 01:18AM

> With InnoDB, you are better off chunking on the
> PRIMARY KEY(record_num), not a secondary key
> (enabled, record_num).
> Plan A: Simply use blocks of 50000 record_nums,
> but let the WHERE clause toss out any that have
> enabled != 1. Hence, the chunks will be variable
> in size, possibly even 0.
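
For reference, Plan A sketched as SQL (an assumption based on the table and column names mentioned in this thread — `content`, `record_num`, `enabled`):

```sql
-- Walk the PRIMARY KEY in fixed ranges of 50000 record_nums; the WHERE
-- clause discards disabled rows, so each chunk is variable in size,
-- possibly even 0 rows.
SELECT record_num, title
FROM content
WHERE record_num >= 1 AND record_num < 50001
  AND enabled = 1;
-- next chunk: record_num >= 50001 AND record_num < 100001, and so on.
```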

This looks promising. I have implemented it and the results are great.

script completed in '6.5349374016126' minutes

> Did you specify "ROW_FORMAT=FIXED"? If so, why?

Nope, I didn't specify it. I converted this table from MyISAM to InnoDB, so I guess that's where it came from.
Should I remove/change it? I tried reading some docs but didn't understand, so I left it as it is.
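
If removing it is the goal, one way (a sketch — test on a copy of the table first, since this rebuilds the table) is to rebuild without the FIXED setting so InnoDB falls back to its own default row format:

```sql
-- Rebuild the table; omitting ROW_FORMAT lets InnoDB pick its default
-- (COMPACT on older versions, DYNAMIC on newer ones).
ALTER TABLE content ENGINE=InnoDB;
```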

> Back to the updating and the "parallel table"
> approach.
> Your rows average 1366 bytes wide. If so, then
> whenever you update a row, it has to shovel all
> 1366 bytes around to do the update. If, instead,
> you had a narrow parallel table of, say, 68 bytes,
> it could be updated with one-twentieth the
> effort.

Looking into it. When the ~6 million rows get updated daily, only their seeders, leechers, verified, and last_updated fields change.
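
One possible shape for that narrow parallel table (an assumption, not the poster's actual schema — only the hot columns, sharing the same PRIMARY KEY as `content`):

```sql
-- Daily updates then shuffle ~tens of bytes per row here instead of
-- the full ~1366-byte rows in the main table.
CREATE TABLE content_stats (
  record_num   INT UNSIGNED NOT NULL,
  seeders      INT UNSIGNED NOT NULL DEFAULT 0,
  leechers     INT UNSIGNED NOT NULL DEFAULT 0,
  verified     TINYINT(1)   NOT NULL DEFAULT 0,
  last_updated TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP
                            ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (record_num)
) ENGINE=InnoDB;
```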

> When you are fetching in chunks, what do you do
> with the results? Are you really fetching
> "record_num, title"?

This is the query I am using:
SELECT record_num, title, last_updated, enabled FROM content ORDER BY record_num ASC LIMIT $from , $subset_count

for generating sitemaps in 50k chunks.

I select record_num, title, last_updated, enabled and then generate the XML sitemaps from those rows.
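
Note that the `LIMIT $from, $subset_count` form re-scans and discards all skipped rows on every chunk, so later chunks get slower. A keyset variant (a sketch — `$last_seen` is a hypothetical placeholder for the highest record_num returned by the previous chunk) follows the PRIMARY KEY directly:

```sql
-- Seek to where the last chunk ended instead of counting off an offset.
SELECT record_num, title, last_updated, enabled
FROM content
WHERE record_num > $last_seen
ORDER BY record_num ASC
LIMIT 50000;
```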
