Re: Partitioning for improving "INSERT .. ON DUPLICATE KEY UPDATE"
> The table in question has two fields that are unique keys
Please provide SHOW CREATE TABLE; there could be subtle things to take note of.
> Toward the end it has slowed down to ~1 per second...
Are you referring to INSERTs per second?
Each INSERT must update each index. In particular, it must check each UNIQUE index.
> Could a partition help with the degraded performance?
Maybe; let's see CREATE TABLE, plus some clues on what order the rows are being inserted.
> thinking partitions would help with this scenario
There are only a few scenarios where PARTITIONing helps. (It usually does not "hurt".) Read more here:
http://mysql.rjweb.org/doc.php/partitionmaint
> unique key date (text as yyyy-mm-dd)
> I'm performing ~7 million "Insert
Do you have 7 million dates?? Oh -- you are probably using IODKU to update values. Ah, a standard tally/summary of incoming data? Well, we need to discuss how to do that _much_ faster. Here is some more reading:
http://mysql.rjweb.org/doc.php/datawarehouse
and especially the link hanging off it about "Summary Tables".
I'm guessing that we can change your "melting down" system into a "nearly idle" system.
OK, I'm getting ahead of myself. I really need a feel for the 1000 inserts you want to do in one second. Like... Are they hitting the same, say, 15 rows over and over? Or are they hitting, say, 900 different rows?
If 15, then collect the 1000 into another table (or file or program), summarize into 15 IODKUs to do, then do them. Repeat.
Note: The summarization (if using a temp table) would use something like
INSERT INTO real_table
    SELECT key1, key2, COUNT(*), SUM(...), ...
        FROM tmp_table
        GROUP BY key1, key2
    ON DUPLICATE KEY UPDATE ...;
(Note that the GROUP BY keys must also appear in the SELECT so they land in the target's unique key.)
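To make the collect-then-summarize pattern concrete, here is a runnable sketch in Python using SQLite's upsert syntax (ON CONFLICT ... DO UPDATE) as a stand-in for MySQL's IODKU. All table and column names (tally, staging, k, cnt, total) are made up for illustration; your real schema will differ.

```python
# Collect raw events into a staging table, then fold them into the
# real table with ONE summarizing upsert instead of one IODKU per row.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tally (k TEXT PRIMARY KEY, cnt INT, total INT)")
db.execute("CREATE TABLE staging (k TEXT, amount INT)")

# 300 raw events land in the staging table first...
db.executemany("INSERT INTO staging VALUES (?, ?)",
               [("a", 1), ("a", 2), ("b", 5)] * 100)

# ...then one statement collapses them into 2 upserts, not 300.
# SQLite spells IODKU as ON CONFLICT ... DO UPDATE; `excluded.x`
# is the value the SELECT tried to insert.
db.execute("""
    INSERT INTO tally (k, cnt, total)
    SELECT k, COUNT(*), SUM(amount) FROM staging GROUP BY k
    ON CONFLICT(k) DO UPDATE SET
        cnt   = cnt   + excluded.cnt,
        total = total + excluded.total
""")
db.execute("DELETE FROM staging")   # ready for the next batch

print(db.execute("SELECT k, cnt, total FROM tally ORDER BY k").fetchall())
# → [('a', 200, 300), ('b', 100, 500)]
```

The key point: the UNIQUE-index checks that dominate your cost now happen once per distinct key per batch, not once per raw event.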
If 900, then you might need to gather 10K or 100K rows before summarizing and applying them. This _may_ have the drawback of updating the 'real' table only once a minute -- is that acceptable?
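The gather-before-applying idea can also live entirely in the client program: accumulate per-key counts in memory and flush once a threshold of raw events is reached. A hedged sketch, where `flush()` stands in for issuing one IODKU per accumulated key (names and the 100K threshold are illustrative):

```python
# Accumulate (count, sum) per key in memory; flush in large batches.
from collections import defaultdict

FLUSH_AT = 100_000                       # raw events per batch
pending = defaultdict(lambda: [0, 0])    # key -> [count, sum]
seen = 0
flushed_batches = []                     # stands in for DB writes

def flush():
    global seen
    # Real code would issue one IODKU per key in `pending` here.
    flushed_batches.append(dict(pending))
    pending.clear()
    seen = 0

def record(key, amount):
    global seen
    pending[key][0] += 1
    pending[key][1] += amount
    seen += 1
    if seen >= FLUSH_AT:
        flush()

for i in range(250_000):                 # 250K events over ~900 keys
    record(i % 900, 1)
flush()                                  # final partial batch

print(len(flushed_batches))              # → 3  (100K + 100K + 50K)
```

With ~900 distinct keys, each flush turns 100K raw events into at most ~900 IODKUs -- the real table lags by one batch, which is the trade-off mentioned above.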
Note: The PARTITION question is mostly independent of the summarization question. I'll be happy to discuss both further.