Re: temp table or query for API
Posted by: Rick James
Date: June 01, 2013 12:15PM

> Drupal uses a new table for each field.
I am feeling ill.

0.00662 slow queries per second is _very_ low.

To investigate further, please provide information from the slowlog, plus
SHOW CREATE TABLE
SHOW TABLE STATUS
If there is nothing in the slowlog, set long_query_time to 1 (and wait).
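If it helps, something like this (table names are my guesses from the query; substitute the real ones) is what I'm asking for, run from the mysql client:
[code]
-- Replace `node` / `media_type` with the actual table names from your SELECT
SHOW CREATE TABLE node\G
SHOW CREATE TABLE media_type\G
SHOW TABLE STATUS LIKE 'node'\G

-- Catch anything taking longer than 1 second
-- (needs SUPER; applies to new connections):
SET GLOBAL long_query_time = 1;
[/code]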

Also provide the output from EXPLAIN on that SELECT. Please line up the columns for readability. Put [ code ] and [ / code ] (without spaces) around the output if you have it in text format.
I am especially curious as to whether the optimizer can start with
AND media_type.media_type = "video"
or whether it honors the "LEFT" and delays testing media_type.
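(Background: a null-rejecting test in the WHERE clause lets MySQL convert a LEFT JOIN into an ordinary INNER join, after which it may start from either table. A minimal sketch of the shape I'm picturing; the actual query, tables, and key columns are assumptions on my part:)
[code]
EXPLAIN
SELECT  node.nid
    FROM  node
    LEFT JOIN  field_media  ON field_media.nid = node.nid
    LEFT JOIN  media_type   ON media_type.mid = field_media.mid
    WHERE  node.type = 'media'
      AND  media_type.media_type = 'video';  -- null-rejecting: NULL fails '=',
                                             -- so the LEFT can be treated as INNER
[/code]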

This is the root of poor performance on _this_ SELECT:
AND node.type = "media"
AND media_type.media_type = "video"
Since those two fields are in different tables, the query is inherently inefficient. (No single index can cover a filter that spans two tables, so one of the tests can only be applied after the join.) And there is no "good" reason (I exclude Drupal's 'reason') to separate type and media_type into two different tables. And having an extra table in between the two makes it worse!
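If both values lived in one table, a single composite index could satisfy both tests at once. A sketch of the idea (column names hypothetical; not a drop-in fix for Drupal's schema):
[code]
-- Both filters on one table --> one composite index handles them:
ALTER TABLE node ADD COLUMN media_type VARCHAR(32);
ALTER TABLE node ADD INDEX type_media (type, media_type);

SELECT  nid
    FROM  node
    WHERE  type = 'media'
      AND  media_type = 'video';  -- resolved entirely by INDEX(type, media_type)
[/code]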

Meanwhile, I have taken note that Drupal is a ghastly product and should be shunned. On the plus side, MySQL can perform terrible queries fast enough for most people to not be bothered. But this tells me that Drupal cannot scale beyond a tiny amount of data.

What is "API Response Time"? The values seem to be under 3ms. That is fast enough to be 'not a problem' to the user, and fast enough to imply efficiency in the database (if the database is touched).

The performance problem will hit hard when the data becomes too big to cache in RAM. See this for what you can do to keep that cliff at bay:
http://mysql.rjweb.org/doc.php/memory
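The main knob (for InnoDB) is the buffer pool; a common rule of thumb, assuming a server dedicated to MySQL, is roughly 70% of RAM. A sketch (the size here is an assumption; see the link above for the full discussion):
[code]
# my.cnf
[mysqld]
innodb_buffer_pool_size = 4G   # ~70% of RAM on a hypothetical dedicated 6GB box
[/code]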

Note that this schema seems to go beyond textbook normalization. Ordinary normalization can usually (not always) be made performant. Over-normalization is a common cause of performance problems in huge databases (millions or billions of rows). I worry that you will have performance problems with only thousands of rows.
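To illustrate the "new table for each field" pattern: fetching one node with N fields costs N joins, where a sane schema reads one row. A hypothetical sketch (table and column names invented for illustration):
[code]
-- Field-per-table style: one join per field
SELECT  n.nid, t.field_title_value, b.field_body_value
    FROM  node AS n
    JOIN  field_data_field_title AS t  ON t.entity_id = n.nid
    JOIN  field_data_field_body  AS b  ON b.entity_id = n.nid
    WHERE  n.nid = 12345;

-- Ordinary normalization: one row, no joins
SELECT  nid, title, body  FROM  node  WHERE  nid = 12345;
[/code]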

Bottom line... If you hope your site will eventually have more than a few thousand rows in `node` (and/or other tables), you will find that performance will decline to the point of being unusable. You must rethink the schema; there is no way to enhance the hardware, tune MySQL, or otherwise speed up this application without getting Drupal out of the way.
