MySQL Forums

Re: Utilize maximum system resources for better performance
Posted by: Ronald Brown
Date: August 09, 2015 12:40AM

Rick James Wrote:
> Are content and content_description one-to-one?

Yes, they are one-to-one, but not all records have a description, so content_description is mostly empty for now.

> Are Users and content_description currently
> empty?

Users contains 2-3 test rows, and content_description is empty. Neither will have many more rows anytime soon.

> What are the indexes of Users and
> content_description?

PRIMARY KEY (`record_num`),
UNIQUE KEY `username` (`username`),
UNIQUE KEY `email` (`email`),
KEY `password` (`password`)

> | innodb_buffer_pool_size | 1073741824 |
> | key_buffer_size         | 2411724800 |

That is the old configuration; currently it is:

| innodb_buffer_pool_size | 25769803776 |
| key_buffer_size | 104857600 |

> and you have 32GB of RAM?

yes, 32 GB RAM
       total  used  free  shared  buff/cache  available
Mem:     31G   20G  231M    3.4G         10G       6.7G
Swap:    15G   80M   15G

> I see multiple notes in this thread on those two
> settings; I recommend
> innodb_buffer_pool_size = 10G
> key_buffer_size = 500M
> until you convert all tables to InnoDB. At that
> point, change to
> innodb_buffer_pool_size = 22G
> key_buffer_size = 40M

Should I lower innodb_buffer_pool_size from 24G to 10G, or leave it as it is?
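For reference, Rick's interim recommendation as a my.cnf fragment would look like this (values taken straight from his quote above, so this is just a restatement, not something I have tested):

```ini
[mysqld]
# Interim settings while MyISAM tables remain
innodb_buffer_pool_size = 10G
key_buffer_size         = 500M

# After converting all tables to InnoDB (on this 32GB box)
# innodb_buffer_pool_size = 22G
# key_buffer_size         = 40M
```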

> > No, not complete replacements, only fields
> seeders, leechers, verified, category and
> last_updated, which gets updated daily .
> Would it make sense to have those columns in a
> separate table? This would make the nightly
> processing simpler and more efficient?

Yes, if there is no other option, that can be done.
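If we go that route, a minimal sketch of the split might look like this. The table name and the column types are my guesses based on this thread, not the real schema:

```sql
-- Hypothetical split: move the daily-churning columns out of `content`
CREATE TABLE content_stats (
  record_num   INT UNSIGNED NOT NULL,   -- same PK as content.record_num
  seeders      INT UNSIGNED NOT NULL DEFAULT 0,
  leechers     INT UNSIGNED NOT NULL DEFAULT 0,
  verified     TINYINT(1)   NOT NULL DEFAULT 0,
  category     INT UNSIGNED NOT NULL,
  last_updated DATETIME     NOT NULL,
  PRIMARY KEY (record_num)
) ENGINE=InnoDB;
```

The nightly job would then only touch this narrow table, leaving the wide `content` rows alone.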

> 6.5M UPDATE statements must take a long time.
> LOAD DATA into a `new` table would be orders of
> magnitude faster, then
> RENAME TABLE `real` TO `old`, `new` TO `real`;
> DROP TABLE `old`;
> would instantly and atomically update that table
> -- _zero_ downtime.

I just ran the updates: the complete 6.5 million row updates took 19.xx minutes with the QC disabled, and 21.xx minutes with the QC enabled.

But is this a viable option, disabling autocommit and updating on the live site?

There is one more issue with LOAD DATA and swapping tables: the 6.5 million records are not all of the records, so the rows not in the load would be lost in the swap.
A bad way around would be to create a new table, LOAD DATA into it, then take the loaded values and update them in the main table. But that is the same as updating from any other source; just thinking out loud.
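Spelling out that "bad way around" anyway, since it is not quite the same as 6.5M single-row statements: one multi-table UPDATE against a staging table is a single set-based operation and only touches the matching rows, so nothing else is lost. Staging table name, file path, and column list are all hypothetical:

```sql
-- Load the nightly dump into an empty staging table
LOAD DATA INFILE '/tmp/nightly.csv'
INTO TABLE content_staging
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
(record_num, seeders, leechers, verified, category, last_updated);

-- One set-based update of only the matching rows;
-- records absent from the staging table are untouched
UPDATE content c
JOIN content_staging s ON s.record_num = c.record_num
SET c.seeders      = s.seeders,
    c.leechers     = s.leechers,
    c.verified     = s.verified,
    c.category     = s.category,
    c.last_updated = s.last_updated;
```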

> (OK, I don't know if there is an issue with the
> hash.)

What issue?

> The QC, at least for that table, will be purged
> every _minute_ because of the cron job. If users
> fetch the same stuff repeatedly in 59 seconds,
> then the QC is somewhat useful.

IMHO the QC is useful, as there will be many reads on the live site.
I checked this one query with the QC disabled and it took 6 seconds every time; with the query cache enabled it takes 6 seconds the first time and 0.00 afterwards:

SELECT * FROM content WHERE enabled = 1 ORDER BY seeders DESC , upload_date DESC LIMIT 0,25
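For what it is worth, a composite index might make that query fast even without the QC. Both ORDER BY columns are DESC, so a backward scan of an ascending index can satisfy the sort. The index name is made up and I have not tested this against the real table:

```sql
-- For: WHERE enabled = 1 ORDER BY seeders DESC, upload_date DESC LIMIT 0,25
ALTER TABLE content
  ADD INDEX idx_enabled_seed_date (enabled, seeders, upload_date);
```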

> > you mean count it once a day and the store it in
> another table i guess.
> Yes.

Okay, that can be done.
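Something like this nightly cron job, assuming a hypothetical summary table and assuming it is the enabled-rows total we want to count:

```sql
CREATE TABLE daily_counts (
  stat_date DATE NOT NULL,
  total     INT UNSIGNED NOT NULL,
  PRIMARY KEY (stat_date)
) ENGINE=InnoDB;

-- Run once a day; re-running the same day just refreshes the value
INSERT INTO daily_counts (stat_date, total)
SELECT CURDATE(), COUNT(*) FROM content WHERE enabled = 1
ON DUPLICATE KEY UPDATE total = VALUES(total);
```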

> > reading 6M rows , i guess those are queries
> fired by sphinx indexer ,as it requres to laod all
> data to create index.
> Ouch. Do you know when that is happening? Does
> it have an adverse impact on other things?

It's a temporary issue; I'm going to switch to a real-time index in Sphinx soon.
But there will still be a SELECT of 6 million records once a day for creating sitemaps.
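For that sitemap pass, chunking by primary key (keyset pagination) avoids holding one giant 6M-row result set; record_num is assumed from the index listing above:

```sql
-- Repeat, feeding the last record_num seen back into the placeholder,
-- until no rows are returned (use 0 on the first pass).
-- Add whatever columns the sitemap actually needs to the SELECT list.
SELECT record_num
FROM content
WHERE record_num > ?
ORDER BY record_num
LIMIT 10000;
```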

Thanks for your time.
