Do you mean that I should remove all the indexes I created to make my queries run faster? But then how do real-life systems that use partitioning cope with uniqueness and query indexes? I am not sure I follow you there.
Yeah, it is weird.
* Getting rid of all indexes -- bad move; every query hits every partition.
* No UNIQUE keys (but some non-unique indexes) -- useful in some situations; but it does not allow for AUTO_INCREMENT
Well, if you are switching to PARTITION, you can change BIGINT to TINYINT for the partition key, and you could handle AUTO_INCREMENT differently.
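A minimal sketch of what "handle AUTO_INCREMENT differently" can look like (table and column names here are hypothetical, not from this thread). MySQL requires every UNIQUE key, including the PRIMARY KEY, to contain the partitioning column, so the auto-increment column moves into a composite key; InnoDB then needs some index that starts with the auto-increment column:

```sql
-- Hypothetical example: partition on a small segment id instead of a BIGINT.
CREATE TABLE events (
    seg_id  TINYINT UNSIGNED NOT NULL,           -- was BIGINT; TINYINT if few segments
    id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    payload VARCHAR(255),
    PRIMARY KEY (seg_id, id),   -- must include the partitioning column
    KEY (id)                    -- non-unique; satisfies InnoDB's AUTO_INCREMENT rule
) ENGINE=InnoDB
PARTITION BY RANGE (seg_id) (
    PARTITION p0   VALUES LESS THAN (10),
    PARTITION p1   VALUES LESS THAN (20),
    PARTITION pmax VALUES LESS THAN MAXVALUE
);
```

Note the trade-off: `KEY (id)` is not UNIQUE, so uniqueness of `id` across partitions is no longer enforced by the engine -- which is exactly the "no UNIQUE keys" situation described above.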
Therefore I thought that partitioning would drastically improve the application's performance and keep my indexes much smaller
* Drastic performance improvement? Not likely. Some improvement, yes.
* Indexes much smaller? The "same" index will be almost identical in size...
Indexes are (in most cases) BTrees. A BTree starts with a root node; this links to (say) 100 subnodes; each of those links to 100 sub-subnodes, and so on, until the bottom layer. A million rows would need about 3 layers; a trillion rows would need about 6 layers. Each node is 1 "block" (1KB for MyISAM; 16KB for InnoDB).
If you split a table into 100 partitions, that's like removing the top layer, then putting the rest of the nodes into the various partitions. That is, partitioning shrank the "index size" by only one block.
So, a million-row plain table drills down 3 BTree layers.
A million-row table partitioned across 100 partitions picks the partition (liken that to the top layer of a BTree), then drills down 2 layers of BTree.
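The layer counts above are just logarithms of the fanout (assuming roughly 100 keys per node, as in the example):

```latex
\text{layers} \approx \lceil \log_{100} N \rceil
\qquad \log_{100} 10^{6} = 3
\qquad \log_{100} 10^{12} = 6
```

Splitting into 100 partitions divides N by 100, which subtracts only 1 from the logarithm -- hence "shrank the index by only one block" of depth.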
If ALL queries include "AND seg_id = N", then having ALL indexes _start_ with seg_id gives you virtually identical performance between a plain table and a table partitioned on seg_id.
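Concretely, the plain-table analogue of partitioning on seg_id is just to put seg_id first in each index (table and column names here are hypothetical):

```sql
-- Hypothetical example: seg_id leads the index, mimicking the partition pruning step.
ALTER TABLE events
    ADD INDEX ix_seg_created (seg_id, created_at);

-- The index pins the segment first, then range-scans within it -- one BTree drill-down:
SELECT id, payload
  FROM events
 WHERE seg_id = 42
   AND created_at >= '2024-01-01';
```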
If SOME queries need to reach into the table by other means, the story could be different.
Why have AUTO_INCREMENT if you never use its uniqueness to reach into the table?