> How can I ensure that all my tables are InnoDB? What verification method(s) should I use?
I don't think you can prevent a MyISAM or MEMORY table from being CREATEd, even if you use the NO_ENGINE_SUBSTITUTION SQL mode:
http://dev.mysql.com/doc/refman/5.1/en/sql-mode.html#sqlmode_no_engine_substitution
MyISAM is required for the system tables in the `mysql` database, and MEMORY is used for some implicit temporary tables.
This will list the non-InnoDB tables; you could run it periodically:

SELECT table_schema, table_name, engine
  FROM information_schema.TABLES
 WHERE engine != 'InnoDB'
   AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');
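One way to automate the periodic check is a cron entry that runs the query and lets cron mail you any rows it finds. This is just a sketch; the schedule and the assumption of a `~/.my.cnf` with credentials are mine, not from the original:

```
# Hypothetical crontab entry: nightly at 03:00, report any non-InnoDB tables.
# -N suppresses the header row, so cron sends mail only when rows exist.
0 3 * * * mysql -N -e "SELECT table_schema, table_name, engine FROM information_schema.TABLES WHERE engine != 'InnoDB' AND table_schema NOT IN ('mysql','information_schema','performance_schema')"
```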
> And it seems this whole plan worked
Yes, I did not see any "show stoppers"; my comments were minor improvements.
> implementing a backup strategy (weekly full, daily incremental),
I'm not a fan of incrementals: a restore means applying every incremental since the last full, so recovery is slower and a single bad incremental breaks the chain.
> probably at the Slave side (if possible) to prevent possible burden or blockings on the Master.
Yes. A Slave is a good place to take a backup.
However, if you direct "readonly" requests to the Slave to offload the Master, there will be some interference between backup and reads.
> I guess I can hope XtraBackup can help me with that too. I need to investigate this stuff further.
Yes.
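For reference, a full-plus-incremental cycle with XtraBackup looks roughly like this. The directory names are assumptions, and exact flags vary by XtraBackup version, so treat it as a sketch rather than a recipe:

```
# Weekly full backup (hypothetical target directories):
xtrabackup --backup --target-dir=/backups/full

# Daily incremental, based on the most recent full:
xtrabackup --backup --target-dir=/backups/inc1 --incremental-basedir=/backups/full

# Restore: prepare the full with --apply-log-only, apply each incremental
# in order, then do a final prepare without --apply-log-only.
xtrabackup --prepare --apply-log-only --target-dir=/backups/full
xtrabackup --prepare --apply-log-only --target-dir=/backups/full --incremental-dir=/backups/inc1
xtrabackup --prepare --target-dir=/backups/full
```

Note how many steps the restore path has; that is the main reason I prefer plain fulls.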
> I saw innodb_file_per_table option before and read something about it, but I will check it again to see if there's any advantage to use it.
The big advantage comes when you need to DROP, ALTER, etc., some big table. With everything in ibdata1, the freed space is reused by InnoDB but never returned to the OS. With file_per_table (one .ibd "tablespace" per table), dropping the table returns the space to the OS.
At a few GB, file_per_table does not matter; at 500GB, it could be important.
Loading a mysqldump while creating the Slave was the last "convenient" time to switch to file_per_table.
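If you do switch, the setting is a one-liner in my.cnf (a config fragment; note that it only affects tables created or rebuilt after the change — existing tables stay in ibdata1 until you rebuild them, e.g. with ALTER TABLE ... ENGINE=InnoDB):

```
[mysqld]
innodb_file_per_table = 1
```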
If your CentOS supports LVM, that is an even better way to take backups: creating a snapshot of the 500GB data directory takes _literally_ seconds ("Magic"), and you can then copy the snapshot off at leisure while MySQL keeps running.
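Roughly, the LVM approach goes like this. The volume names, snapshot size, and paths are all my assumptions; the key points are that the global read lock must be held by one open session while the snapshot is created, and that MySQL is only paused for that instant:

```
# Hold the lock and snapshot in a single client session
# (the lock is released when the session ends):
mysql <<'SQL'
FLUSH TABLES WITH READ LOCK;
system lvcreate --snapshot --size 10G --name mysql_snap /dev/vg0/mysql
UNLOCK TABLES;
SQL

# Mount the snapshot read-only and copy the data off at leisure:
mount -o ro /dev/vg0/mysql_snap /mnt/snap
rsync -a /mnt/snap/ /backups/$(date +%F)/
umount /mnt/snap
lvremove -f /dev/vg0/mysql_snap
```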