Re: User defined fields in a schema
Split your "fields" into two categories.
1. Fields that are common to most rows and critical to the application, plus any that are likely to be _useful_ for filtering or indexing.
2. All other fields, including any you decide to add in the future.
Now build a single table with
* id AUTO_INCREMENT PRIMARY KEY (because it is usually useful),
* The fields from category 1 -- and apply the appropriate DATATYPEs. Decide on the indexes (usually 'compound' indexes) as you are formulating your SELECTs.
* For all the category 2 fields together, add just one more column -- a TEXT column holding a JSON-encoded key-value list. (A sketch of such a table follows this list.)
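Something like this, perhaps -- a sketch, not a spec: the table name, datatypes, sizes, and the index are my guesses from your post; adjust to your data. It assumes PHP's mysqli extension.

    // Sketch only: column names come from your post; datatypes, sizes,
    // and the index are illustrative guesses.
    $db = new mysqli('localhost', 'user', 'password', 'mydb');
    $db->query("
        CREATE TABLE site (
            id          INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            sitename    VARCHAR(100) NOT NULL,
            postal_code VARCHAR(12)  NOT NULL,
            id_company  INT UNSIGNED NOT NULL,
            extras      TEXT NOT NULL,          -- all category 2 fields, JSON-encoded
            INDEX(id_company, postal_code)      -- 'compound' index; shape it to your SELECTs
        ) ENGINE=InnoDB
    ");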
Look up JSON. It is simple, human readable, well handled by all(?) languages, and, most importantly, provides an extensible key-value list.
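For example, in PHP (the values here are invented):

    // An extensible key-value list: encode an associative array...
    $extras = array(
        'accesscode'              => 'A-17',       // sample values, made up
        'contactnum'              => '555-0100',
        'need_photo_id_to_access' => true,
    );
    $json = json_encode($extras);
    // $json is now this human-readable string:
    // {"accesscode":"A-17","contactnum":"555-0100","need_photo_id_to_access":true}

    // ...and get it back later:
    $back = json_decode($json, true);    // true => return an associative array
    // $back['contactnum'] is '555-0100' again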
If the table gets 'large', I like to compress the JSON (in the client) and store it in a BLOB instead. Text typically compresses about 3:1, so this saves disk space and I/O at the cost of a little client CPU.
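A sketch of that variant, using PHP's standard zlib functions and the $db connection from the CREATE TABLE sketch; the extras column would be declared BLOB (or MEDIUMBLOB) instead of TEXT, and $extras/$id/$row are assumed from context:

    // Writing: compress the JSON in the client, store it in a BLOB column.
    $blob = gzcompress(json_encode($extras));   // zlib-compressed JSON
    $stmt = $db->prepare("UPDATE site SET extras = ? WHERE id = ?");
    $stmt->bind_param('si', $blob, $id);
    $stmt->execute();

    // Reading: uncompress, then decode.
    $extras = json_decode(gzuncompress($row['extras']), true);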
Now, you ask, "what if I want to filter on something that is in the JSON?"
Case 1: If it is one of several fields you are filtering on (e.g., you want "WHERE date ... AND price < ... AND obscure_field = ..."), then filter on the category 1 fields in SQL, bring the candidate rows into your programming language, decode the JSON, and test obscure_field in your code. The filtering on the indexed category 1 columns will probably leave few enough rows for this to be fast.
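A sketch of that pattern, reusing $db from above; date, price, obscure_field, and the bound values are placeholders from the example WHERE clause, not columns from your post:

    // Filter on indexed category 1 columns in SQL; finish the obscure
    // test in the client.  $since and $maxPrice are set elsewhere.
    $stmt = $db->prepare(
        "SELECT id, extras FROM site WHERE date >= ? AND price < ?");
    $stmt->bind_param('sd', $since, $maxPrice);
    $stmt->execute();
    $result = $stmt->get_result();    // needs the usual mysqlnd driver

    $hits = array();
    while ($row = $result->fetch_assoc()) {
        $extras = json_decode($row['extras'], true);
        if (isset($extras['obscure_field']) && $extras['obscure_field'] === 'some value') {
            $hits[] = $row['id'];     // passed the client-side test
        }
    }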
Case 2: If that is the only field you are filtering on, then it should have been in category 1, with a column of its own. If it is buried in the JSON, the SELECT will be costly, because you must fetch and decode every row just to filter in your code.
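If you discover this too late, you can still promote a JSON field to a real indexed column with a one-time migration done in the client. A rough sketch (obscure_field and its datatype are hypothetical; on a big table, run it offline or in batches):

    // One-time: add the real column, then backfill it from the JSON.
    $db->query("ALTER TABLE site ADD COLUMN obscure_field VARCHAR(50) NULL,
                ADD INDEX(obscure_field)");

    $upd = $db->prepare("UPDATE site SET obscure_field = ? WHERE id = ?");
    $upd->bind_param('si', $val, $id);          // bound by reference
    $result = $db->query("SELECT id, extras FROM site");
    while ($row = $result->fetch_assoc()) {
        $extras = json_decode($row['extras'], true);
        $val = isset($extras['obscure_field']) ? $extras['obscure_field'] : null;
        $id  = $row['id'];
        $upd->execute();
    }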
In your case, these might be in category 1:
sitename,
postal_code (not the whole address),
id_company
Category 2:
the rest of the address,
accesscode,
contactnum,
need_photo_id_to_access,
and future fields (see the sample INSERT below)
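Putting the two categories together, an INSERT might look like this ($sitename, $postal_code, and $id_company are assumed set; the extras values are invented):

    // Category 1 values go in real columns; category 2 values are packed
    // into the JSON column.
    $extras = json_encode(array(
        'address2'                => 'Building 7, rear entrance',  // rest of the address
        'accesscode'              => 'A-17',
        'contactnum'              => '555-0100',
        'need_photo_id_to_access' => true,
    ));
    $stmt = $db->prepare(
        "INSERT INTO site (sitename, postal_code, id_company, extras)
         VALUES (?, ?, ?, ?)");
    $stmt->bind_param('ssis', $sitename, $postal_code, $id_company, $extras);
    $stmt->execute();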
Note: As long as you are adding new fields to the JSON column, there is no need for ALTER TABLE ADD COLUMN.
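For example, to start recording a brand-new field, just add a key when you write the row; old rows simply won't have it, which your code can treat as "not set". A sketch (the field name is hypothetical, and $row comes from an earlier SELECT):

    // No ALTER TABLE needed: the new key just appears in newer rows.
    $extras = json_decode($row['extras'], true);
    $extras['wheelchair_access'] = 'ramp at side door';   // hypothetical new field
    $json = json_encode($extras);
    $id   = $row['id'];
    $upd  = $db->prepare("UPDATE site SET extras = ? WHERE id = ?");
    $upd->bind_param('si', $json, $id);
    $upd->execute();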
For PHP, see json_encode() and json_decode(), plus (optionally) gzcompress() and gzuncompress().