Date: January 12, 2010 10:20AM
I've done a few Google searches on this topic but haven't found any good answers, so I figured I'd ask here:

I've got a large (710,000-row, 51 MB) database that's growing every day. I'm concerned about performance, and therefore about size. One column is a varchar(250), and about 30%-40% of its values are duplicates. I believe I could shave a lot of size off this database with some sort of column-based compression. Is there any way to do this with a MySQL command? If not, would it buy me any speed or size reduction to write a PHP script that goes through the table, finds the duplicates in that column, assigns each distinct value a key (1, 2, 3, etc.), replaces the duplicate values with their keys, and keeps the key relations in a new lookup table?
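The script-based approach described above is essentially classic normalization: move the distinct varchar values into a lookup table and store a small integer key in the big table instead. Below is a minimal sketch of that idea using SQLite via Python's `sqlite3` module as a stand-in for MySQL/PHP; the table and column names (`big_table`, `category`, `categories`) are hypothetical, and MySQL syntax would differ slightly (e.g. `AUTO_INCREMENT`, multi-table `UPDATE`).

```python
import sqlite3

# Hypothetical schema: a table whose TEXT column has many duplicate values.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE big_table (id INTEGER PRIMARY KEY, category TEXT)")
cur.executemany("INSERT INTO big_table (category) VALUES (?)",
                [("red",), ("blue",), ("red",), ("green",), ("blue",), ("red",)])

# Step 1: build a lookup table of distinct values, each assigned an integer key.
cur.execute("""CREATE TABLE categories (
                   cat_id INTEGER PRIMARY KEY AUTOINCREMENT,
                   name   TEXT UNIQUE)""")
cur.execute("INSERT INTO categories (name) SELECT DISTINCT category FROM big_table")

# Step 2: add an integer key column and populate it from the lookup table.
cur.execute("ALTER TABLE big_table ADD COLUMN cat_id INTEGER")
cur.execute("""UPDATE big_table
               SET cat_id = (SELECT cat_id FROM categories
                             WHERE categories.name = big_table.category)""")

# The original varchar column could now be dropped; queries join on cat_id.
cur.execute("""SELECT c.name, COUNT(*)
               FROM big_table b
               JOIN categories c ON c.cat_id = b.cat_id
               GROUP BY c.name
               ORDER BY c.name""")
print(cur.fetchall())  # each distinct value with its row count
```

The size win comes from replacing a repeated string of up to 250 bytes with a fixed-width integer; whether it also helps query speed depends on whether the column is indexed and how often the join back to the lookup table is needed.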