Rick James Wrote:
-------------------------------------------------------
> Use utf8 if you need it; don't use it you don't.
> In particular, you _probably_ should not use it
> for `hash`. CHAR(40) in utf8 always takes 120
> bytes; in latin1 or ascii, it takes only 40
> bytes.
Yes, I have switched again. I was seeing the issue with slower updates while on unicode and was doing some tweaking; anyway, it is now set to ascii.
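For reference, the change was along these lines (a rough sketch; the exact column definition and collation on my side may differ):

-- switch the 40-char hash column from utf8 to ascii to cut it to 40 bytes
ALTER TABLE content
    MODIFY hash CHAR(40) CHARACTER SET ascii COLLATE ascii_bin NOT NULL;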
> Do you have other tables that JOIN to this one via
> `record_num`?
For results queries, no.
But for content/post pages, yes.
e.g.
SELECT hash,
title,
og_name,
keywords,
files_count,
more_files,
files,
category,
size,
leechers,
completed,
seeders,
creation_date,
upload_date,
last_updated,
vote_up,
vote_down,
comments_count,
imdb,
content.verified,
uploader,
anonymous,
content.record_num,
users.username AS username,
content_description.description AS description
FROM content
LEFT JOIN users
ON users.record_num = content.uploader
LEFT JOIN content_description
ON content_description.content = content.record_num
WHERE content.record_num = ? AND content.enabled = 1 LIMIT 0,1
> Is the 6.5M rows a complete replacement? (In
> which case, I will give one recommendation.)
> Or is it some new, some changed data? (In which
> case, another.)
No, not complete replacements; only the fields seeders, leechers, verified, category and last_updated get updated, once daily.
> Are you using LOAD DATA? INSERT one row at a
> time? Other?
>
I am running this query:
UPDATE content
SET
    seeders = '$seeders',
    leechers = '$leechers',
    verified = '$verified',
    category = '$category',
    last_updated = NOW()
WHERE
    hash = '$info_hash'
for each hash, i.e. for each record, while updating 6.5 million rows once daily. Now I understand how stupid that is; I guess I should try LOAD DATA by creating a file in a supported format and then pulling it in with LOAD DATA.
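Something like this is what I have in mind (just a sketch; the staging table, column types and file path are placeholders):

-- staging table holding the daily dump (hypothetical definition)
CREATE TABLE IF NOT EXISTS content_staging (
    hash     CHAR(40) CHARACTER SET ascii NOT NULL,
    seeders  INT UNSIGNED NOT NULL,
    leechers INT UNSIGNED NOT NULL,
    verified TINYINT NOT NULL,
    category SMALLINT UNSIGNED NOT NULL,
    PRIMARY KEY (hash)
) ENGINE=InnoDB;

TRUNCATE TABLE content_staging;

-- bulk-load the daily tab-separated dump
LOAD DATA LOCAL INFILE '/path/to/daily_dump.tsv'
INTO TABLE content_staging
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
(hash, seeders, leechers, verified, category);

-- one multi-table UPDATE instead of 6.5M single-row UPDATEs
UPDATE content
JOIN content_staging USING (hash)
SET content.seeders      = content_staging.seeders,
    content.leechers     = content_staging.leechers,
    content.verified     = content_staging.verified,
    content.category     = content_staging.category,
    content.last_updated = NOW();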
> The Query Cache is useful only for relatively
> static tables. Perhaps this is static except for
> once a day?
The table is not that static, I guess:
1) Records get added and updated every minute by a cron job; approx. 10-50 records per minute get inserted and then updated.
2) Around 10k records are updated once per hour (only the fields seeders, leechers, verified, category and last_updated).
3) Once a day, 6.5 million records get updated, sometimes twice a day (only the fields seeders, leechers, verified, category and last_updated).
> Oh, pagination. My blog on that covers many of
> your problems:
>
> http://mysql.rjweb.org/doc.php/pagination
> WHERE enabled = 1 ORDER BY upload_date DESC LIMIT
> 0,25
> is an excellent candidate for what my blog
> suggests.
Checking it now.
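If I am reading it right, the idea is to remember where the previous page left off instead of using OFFSET; a sketch of my understanding (the date and record_num values below are just placeholders):

-- first page
SELECT record_num, title, upload_date
FROM content
WHERE enabled = 1
ORDER BY upload_date DESC, record_num DESC
LIMIT 25;

-- next page: pass in the last (upload_date, record_num) shown on the previous page
SELECT record_num, title, upload_date
FROM content
WHERE enabled = 1
  AND (upload_date, record_num) < ('2014-08-07 12:34:56', 1234567)
ORDER BY upload_date DESC, record_num DESC
LIMIT 25;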
> There are over 6M 'enabled' records. Simply
> recompute that daily, then format the response as
> "> 6 million", not "exactly 6,686,811".
This is a great idea. You mean count it once a day and then store it in another table, I guess.
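Something like this, maybe (table and column names are just placeholders):

-- small summary table refreshed once a day from cron (or a MySQL EVENT)
CREATE TABLE IF NOT EXISTS daily_counts (
    metric     VARCHAR(50) NOT NULL PRIMARY KEY,
    value      BIGINT UNSIGNED NOT NULL,
    updated_at DATETIME NOT NULL
);

REPLACE INTO daily_counts (metric, value, updated_at)
SELECT 'enabled_total', COUNT(*), NOW()
FROM content
WHERE enabled = 1;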
>
> SELECT count(*) as total_results
> FROM content
> WHERE content.enabled = 1
> AND category BETWEEN 400 AND 420
> AND upload_date >= '2014-08-08'
> AND verified = 1
> can be sped up slightly by putting the "= const"
> fields first. That is:
> INDEX(enabled, verified, category, upload_date)
> (The first two columns can be in either order.
> It's unclear what the better order for the last
> two is.)
> If you turn the category into an IN, _and_ your
> version has MRR, you _might_ get more speed.
I can turn the category into an IN; reading more about MRR now.
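For reference, here is roughly what I would try, combining the suggested index with the IN rewrite (the index name is arbitrary and the list assumes every category from 400 to 420 is in use):

-- composite index with the "= const" columns first, as suggested
ALTER TABLE content
    ADD INDEX idx_enabled_verified_cat_date (enabled, verified, category, upload_date);

SELECT COUNT(*) AS total_results
FROM content
WHERE enabled = 1
  AND verified = 1
  AND category IN (400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410,
                   411, 412, 413, 414, 415, 416, 417, 418, 419, 420)
  AND upload_date >= '2014-08-08';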
> See my indexing cookbook:
> http://mysql.rjweb.org/doc.php/index_cookbook_mysq
> l
>
Looking at it now.
> Queries 1,2,3 from pt-query-digest smell like
> manually run things, not production queries. Some
> involve reading 6M rows, which seems unlikely.
Reading 6M rows: I guess those are the queries fired by the Sphinx indexer, as it needs to load all the data to build its index.
Thanks for your time.