
Re: Switch to file_per_table using a slave
Posted by: Rick James
Date: May 08, 2010 03:55PM

Where do you put the dump?
Do you have 250GB to spare?

Consider the following...
1. Set up a second mysqld instance on the same box. Use port 3307 (or something). Use a different data directory. Use file-per-table. You will probably have to use a different my.cnf! (A rough sketch of both steps follows below.)
2. mysqldump --port 3306 | mysql --port 3307
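For what it's worth, here is roughly what that could look like. The option file name, paths, and port below (my3307.cnf, /var/lib/mysql3307, 3307) are just placeholders; adjust them to your layout. Also note that on localhost the clients need --protocol=TCP (or an explicit --socket) so the pipe really hits 3307 instead of the default socket:

  # /etc/my3307.cnf -- minimal config for the second instance (illustrative only)
  [mysqld]
  port                  = 3307
  socket                = /var/lib/mysql3307/mysql.sock
  datadir               = /var/lib/mysql3307
  innodb_file_per_table = 1

  # initialize the new datadir (MySQL 5.x), start the second instance, then pipe the data across
  mysql_install_db --defaults-file=/etc/my3307.cnf
  mysqld_safe --defaults-file=/etc/my3307.cnf &
  mysqldump --port=3306 --all-databases --single-transaction --routines \
      | mysql --protocol=TCP --port=3307

--single-transaction gives a consistent dump of the InnoDB tables without locking them, which matters if the Master has to stay usable during those 12 hours.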

In other words, skip the backup for that night; instead, create the new server that night. Hopefully this avoids turning the 12-hour dump into 24 hours. And it does not require more than the 250GB for the new copy (if you are doing it on Unix -- it probably requires extra space on Windows). If you don't have a dual-core box, this may be CPU-bound and slower because of that.

In the long run, you should really consider having a slave _instead_ of doing the 12-hour backup. And, if you want an off-site backup, then do it from the Slave, leaving the Master up for use.
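If you do go the slave route, the nightly backup can then be as simple as the sketch below. Stopping the SQL thread during the dump is just one way to get a consistent snapshot; the output path is a placeholder:

  # run on the Slave; the Master is untouched
  mysql -e "STOP SLAVE SQL_THREAD;"
  mysqldump --all-databases --routines > /backups/nightly.sql
  mysql -e "START SLAVE SQL_THREAD;"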

Back to the original issue... Why do you want file-per-table? I ask because it may not give you what you want. (Yes, I do recommend file-per-table. But it may not be worth the conversion effort.)

I assume you intend to shrink ibdata1 in the process? If not, then this is all you need (though it may take 12 hours):
1. set file-per-table in my.cnf
2. restart mysql
3. For each table, run ALTER TABLE tbl ENGINE=InnoDB (one way to script this is sketched below).
(If that does not convert a table, it may involve creating a new table, copying the data, RENAME TABLE, and dropping the old one.)
No, you cannot simply delete the ibdata1 file after that -- it still contains critical stuff: the InnoDB data dictionary, undo logs, and the doublewrite buffer.
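If it helps, one way to drive step 3 is to generate the ALTERs from information_schema and pipe them straight back in. The schema exclusions here are only a reasonable default:

  # one ALTER per InnoDB table; after the restart, each rebuild lands in its own .ibd file
  mysql -NBe "SELECT CONCAT('ALTER TABLE \`', table_schema, '\`.\`', table_name, '\` ENGINE=InnoDB;')
              FROM information_schema.tables
              WHERE engine = 'InnoDB'
                AND table_schema NOT IN ('mysql', 'information_schema')" \
      | mysql

Keep in mind that rebuilding every table this way takes roughly as long as the dump/reload, and the tables that move out of ibdata1 do not shrink ibdata1 itself -- they only stop it from growing further.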
