How much RAM do you have? See:
http://mysql.rjweb.org/doc.php/memory
>8GB RAM
Are there likely to be a lot of duplicates in that 1000?
>No duplicates amongst those 1000, and they are sorted by the key
The table will become over 100GB? Random lookups by username or userid will be costly because each lookup will hit the disk an average of once per partition. (Hmmmm... Maybe weekly is a bad idea.) How often are the SELECTs?
>Around 100,000 SELECTs per day, but yes, we need to use the partition key in the WHERE clause to improve performance.
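For reference, one way to confirm that a given SELECT actually prunes to a single partition is the partitions column of EXPLAIN (via EXPLAIN PARTITIONS on MySQL 5.1-5.6; on 5.7+ plain EXPLAIN shows the column). A minimal sketch, where user_data and week_id are placeholder names, not our real schema:

    -- Without the partition key: the partitions column lists every
    -- partition, i.e. every partition gets probed for the lookup.
    EXPLAIN PARTITIONS
    SELECT * FROM user_data WHERE userID = 12345;

    -- With the partition key in the WHERE clause: the partitions
    -- column shows a single partition, so only one is touched.
    EXPLAIN PARTITIONS
    SELECT * FROM user_data WHERE userID = 12345 AND week_id = 7;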
About the partitioning key: 'lastupd' changes very frequently, so it would not make a good partitioning column.
Instead, we are planning to add a new column, 'part_id' TINYINT, whose value is fixed for each week/fortnight (depending on the duration of a partition).
How about using LIST partitioning on the 'part_id' column?
All INSERTs, UPDATEs, and SELECTs will use userID and 'part_id' in their WHERE clauses.
userID alone is no longer the primary key; the key becomes (userID, part_id).
We will have one row per user per partition, which duplicates some data, but it also gives us a snapshot of each user's data in every partition, along with per-user counts per partition. A rough sketch follows below.
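Here is roughly what we have in mind; the table name, column types, and number of partitions are placeholders, not the final schema:

    CREATE TABLE user_stats (
        userID  INT UNSIGNED NOT NULL,
        part_id TINYINT UNSIGNED NOT NULL,  -- fixed within each week/fortnight
        cnt     INT UNSIGNED NOT NULL DEFAULT 0,
        lastupd TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
                          ON UPDATE CURRENT_TIMESTAMP,
        -- MySQL requires every unique key on a partitioned table to
        -- include the partitioning column; (userID, part_id) does.
        PRIMARY KEY (userID, part_id)
    ) ENGINE=InnoDB
    PARTITION BY LIST (part_id) (
        PARTITION p0 VALUES IN (0),
        PARTITION p1 VALUES IN (1),
        PARTITION p2 VALUES IN (2),
        PARTITION p3 VALUES IN (3)
    );

    -- Typical write: create the row for this user/period on first
    -- touch, bump the per-partition counter after that.
    INSERT INTO user_stats (userID, part_id, cnt)
    VALUES (12345, 2, 1)
    ON DUPLICATE KEY UPDATE cnt = cnt + 1;

    -- Typical read: both key columns are present, so the lookup
    -- prunes to a single partition.
    SELECT cnt FROM user_stats WHERE userID = 12345 AND part_id = 2;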
Adding 'part_id' as the partitioning column mainly buys us easier truncation of old data; the same result could be achieved with a DELETE using part_id in the WHERE clause, but I assume DROP PARTITION is much faster than DELETE, especially with data this large.
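For comparison, assuming the sketched schema above (names hypothetical):

    -- DROP PARTITION discards the partition's tablespace in one file-level
    -- operation, so it is near-instant even for a huge partition.
    ALTER TABLE user_stats DROP PARTITION p0;

    -- DELETE walks the rows one by one, writing undo/redo for each,
    -- which is far slower at this scale.
    DELETE FROM user_stats WHERE part_id = 0;

One caveat with LIST partitioning: once p0 is dropped, an INSERT with part_id = 0 fails ("Table has no partition for value 0") until that value is covered again, so reusing the TINYINT ids week after week means re-adding the partition first:

    ALTER TABLE user_stats ADD PARTITION (PARTITION p0 VALUES IN (0));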