
Ruby, Rails and MySQL gem ― through scaffolding, versioning, rake db:create, db:migrate ― HOW to implement and preserve your schema with SQL
Posted by: mike montagne
Date: March 23, 2010 06:16PM


RoR initiates will immediately wonder how they're going to deploy table designs meeting usual SQL standards, when scaffolding processes lack even the power to express vital design concerns as we would with the usual CREATE TABLE statement. The following instructions provide the necessary pattern for implementing your intended SQL schema through development, testing, and deployment, while preserving the RAD advantages of scaffolding processes which otherwise seemingly preclude purposed schemas. This simple approach virtually eliminates any reliance on utilities, and it preserves intended SQL table designs across development, testing, and deployment with a single line of code.

In other words, it establishes the necessary pattern for working to conventional SQL standards in RoR.


RoR (Ruby on Rails) "scaffolding" generates project skeleton code from a database design. Because it so efficiently generates any necessary framework for our eventual application, scaffolding is a fundamental tool of the RoR approach to RAD.

Scaffolding commands are issued from the command line. For instance, if we're working with MySQL, we create the outer RoR MySQL skeleton for our application with a simple command: "rails myApp -d mysql". This command produces the basic framework for our dynamic web application, together with definitions for three versions of our database ― one each for development, testing, and production ― named by suffixing the myApp parameter of the rails command (myApp_development, myApp_test, myApp_production).

Subsequent "scaffold" commands generate somewhat crude definitions for each table of our databases, together with further skeleton code to manage and interface the table. This further skeleton code supports generic RoR processes, which we then customize into a finished application. The important thing to understand, insofar as these instructions are concerned, is that these scaffold commands merely generate a crude definition of each table. They do not create the table. We're going to take advantage of this, intervening at this point to create and preserve our tables with conventional SQL CREATE TABLE statements.

For example, a scaffold command to generate a definition for an amendments table might be:

script/generate scaffold amendment c_code:string c_name:string tran_id:integer priority:integer

Issuing this command from the command prompt generates the further skeleton code to support generic operations on this particular table of our development, test, and production databases. Again, it does not create the table. It creates a crude definition of the table which will subvert many design purposes. Tables are created later by a "rake db:migrate" command. The purpose of these instructions is to overcome the design limitations imposed by the crude "database agnostic" table definitions of scaffold processes.

Although it generates skeleton code which operates on the fields we have crudely defined by scaffolding, this table-defining juncture of the skeleton-generating phase provides the necessary opportunity to plant a line of code in each table's Create[Tablename] self.up method. This one line of code creates the table instead by our usual SQL CREATE TABLE statement, serving every conceivable purpose of good schema design rather than tolerating the database-agnostic dilutions of scaffold table definitions. Our basic process thus likewise preserves the advantages of scaffolding for RAD generation of our skeleton modules. Once we have defined tables with scaffold commands, we simply modify each table's create method. Then we continue development in the usual manner.

Thus, as in all the usual subsequent processes, we first create *our database* (just a home for our tables) with the usual "rake db:create" instruction. We can then create our tables from whatever kind of definition exists in each table's create method. The tables are created (or altered) by issuing "rake db:migrate". If we hadn't modified each of our table create methods, the usual "rake db:migrate" function would create our tables from the crude scaffold definition which initially resides in create self.up. Instead, because we have planted a comprehensive SQL CREATE TABLE statement in every table create method, our migrate functions will always, always, always create our tables from a far more powerful, articulate, and optimization-capable SQL approach.

Thus after our modification of the create method, all our tables are always created and altered by the conventional "rake db:migrate" instruction. The undesirable scaffold definition of our table is gone for good; it can never affect our work; and except for the brief pause to modify the create method as instructed, our work proceeds in all the usual manners.

To integrate our tables with RoR philosophies, however, we must further understand that "script/generate scaffold" commands automatically define three additional fields for each of our tables: an integer field named "id", made the primary key, together with two fields which the resultant .rb file refers to as "timestamps". The latter are actually DATETIME fields, named "created_at" and "updated_at" respectively. The SQL statement we plant in the create method of each table's .rb file must define these fields itself, because they are required by the skeletonized application functions.

Scaffold commands create a dated .rb file for each table definition. These files, named (for versioning purposes) [DateTimeExpr]_create_[TableName].rb, are generated in the [project]/db/migrate/ directory. It is in each of these files that we find the table create method, where we perform the negligible task of replacing the auto-generated code with our necessary SQL CREATE TABLE statement.
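For instance, the amendments scaffold above lands in the migrate directory under a name like the following (the timestamp prefix shown here is hypothetical; yours will reflect the generation time):

```
myApp/db/migrate/20100323181612_create_amendments.rb
```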

To summarize... after we plant our SQL CREATE TABLE statement in each of these files, the three versions of our database are created according to the usual pattern. We create a home for our database by issuing "rake db:create". Then our tables can be created and altered as necessary by issuing "rake db:migrate".


RoR scaffolding doesn't support a multitude of native MySQL field types which are vital to good table designs, efficient resource usage, and efficient throughput. In the database agnostic scheme of scaffolding, highly purposed field types ― TINYINT, SMALLINT, MEDIUMINT, BIGINT, DOUBLE, TINYBLOB, TINYTEXT, MEDIUMBLOB, MEDIUMTEXT, LONGBLOB, LONGTEXT, ENUM, SET... etc. ― go by the wayside, because the agnostic capacities of scaffolding force us to translate our schema's objectives into generic, broader types.

For instance, we might usually create a given table with an SQL command:

CREATE TABLE amendments (id integer NOT NULL AUTO_INCREMENT PRIMARY KEY, c_code varchar(2), c_name varchar(35), tran_id integer, priority tinyint, created_at datetime, updated_at datetime, INDEX (id), FOREIGN KEY INDEX_tran_id (tran_id) REFERENCES translations (id) ON DELETE SET NULL ON UPDATE NO ACTION) ENGINE=InnoDB;

Unfortunately, scaffolding limitations compromise this command to the form of:

script/generate scaffold amendment c_code:string c_name:string tran_id:integer priority:integer

Yes, the creation of the three id, created_at and updated_at fields is automated; but as you see, our intended varchar(2) field is degraded to a "string" type, which ultimately translates to varchar(255).
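As an aside, the migration DSL can itself narrow some of these types through a :limit option; on the Rails 2.x MySQL adapter, string limits become varchar(n) and an integer limit of 1 becomes tinyint. This is only a sketch, not a substitute for the execute technique these instructions describe, because ENUM, SET, foreign keys, and the storage engine still fall outside the DSL:

```ruby
class CreateAmendments < ActiveRecord::Migration
  def self.up
    create_table :amendments do |t|
      t.string  :c_code,   :limit => 2   # varchar(2) rather than varchar(255)
      t.string  :c_name,   :limit => 35  # varchar(35)
      t.integer :tran_id
      t.integer :priority, :limit => 1   # tinyint on the MySQL adapter
      t.timestamps
    end
  end

  def self.down
    drop_table :amendments
  end
end
```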

Obviously, this degradation has potentially huge negative ramifications in our eventual implementation. Disk usage, throughput, and processing all suffer needlessly unless we rectify the degraded definitions to our purposed SQL schema.

Fortunately, however, we can accomplish every object of preserving our purposed schema with a single line of code. Finding no models for the necessary procedure (and instead finding resistance to "prematurely optimizing" my approach to RoR!), I thought it a good idea to simplify these issues for the possible minority of coders who are likewise concerned with record design.


1. Having already installed the MySQL gem, we first create our MySQL project from the command line in terminal, according to usual convention:

rails myApp -d mysql

2. We next create our table definitions (.rb files in [project]/db/migrate/). For example:

script/generate scaffold amendment c_code:string c_name:string tran_id:integer priority:integer

3. BEFORE we create our databases or tables with "rake db:create" and "rake db:migrate" commands however, we now revise the table definitions in each .rb file:

3.a. Our "script/generate scaffold" command resulted in the following .rb file:

class CreateAmendments < ActiveRecord::Migration
  def self.up
    create_table :amendments do |t|
      t.string :c_code
      t.string :c_name
      t.integer :tran_id
      t.integer :priority

      t.timestamps
    end
  end

  def self.down
    drop_table :amendments
  end
end

Yes. That's all we'll find there.

3.b. We want to revise the up method to create our table with actual SQL, using of course the very SQL we would invoke in observation of optimal practice:

class CreateAmendments < ActiveRecord::Migration
  def self.up
    execute "CREATE TABLE amendments (id integer NOT NULL AUTO_INCREMENT PRIMARY KEY, c_code varchar(2), c_name varchar(35), tran_id integer, priority tinyint, created_at datetime, updated_at datetime, INDEX (id), FOREIGN KEY INDEX_tran_id (tran_id) REFERENCES translations (id) ON DELETE SET NULL ON UPDATE NO ACTION) ENGINE=InnoDB;"
  end

  def self.down
    drop_table :amendments
  end
end

As you see, we remember to create the necessary id, created_at, and updated_at fields ourselves.

Although it is desirable to set foreign keys this way, there is one caveat when your foreign key refers to a table which hasn't been created yet: MySQL won't create the constraint until the referenced table exists (you'll get MySQL errno 150). So you either create your referenced tables first, or omit your foreign key references until all your tables are created. Once the tables exist, restore the references, run a migrate, and you're fine thereafter. When later migrating your tables to the test and production databases, you must take the same precautions of creation order: comment out the execute statements of foreign-keyed tables until the referenced tables are created.
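One clean way to honor that creation order, sketched here on the assumption that our translations table has a migration of its own, is to leave the foreign key out of the CREATE TABLE statement entirely and add it in a separate, later-numbered migration (generated with "script/generate migration AddAmendmentsForeignKeys"), so that rake db:migrate always runs it after both tables exist. The constraint name fk_amendments_tran_id is hypothetical:

```ruby
class AddAmendmentsForeignKeys < ActiveRecord::Migration
  def self.up
    # Runs after both tables exist, so no errno 150.
    execute "ALTER TABLE amendments ADD CONSTRAINT fk_amendments_tran_id FOREIGN KEY (tran_id) REFERENCES translations (id) ON DELETE SET NULL ON UPDATE NO ACTION;"
  end

  def self.down
    execute "ALTER TABLE amendments DROP FOREIGN KEY fk_amendments_tran_id;"
  end
end
```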

So, this specification, or any further modifications to this specification, will now preserve our intended table design across all further methods.

Should we modify the design schema, this is where we do it. One caveat: once a migration has been applied, "rake db:migrate" will not run it again in that environment, because Rails records which versions have already been applied. After further modifying a table design in this up method, you must first roll back past that version (or use "rake db:migrate:redo") before "rake db:migrate" will recreate the table from the altered design; fresh environments such as test and production simply pick up the revised statement on their first migrate.
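In keeping with Rails versioning, a later design change can also go into a fresh migration of its own, so the change replays correctly against every environment without any rollback. A hypothetical example, widening c_name:

```ruby
class WidenAmendmentsCName < ActiveRecord::Migration
  def self.up
    execute "ALTER TABLE amendments MODIFY c_name varchar(50);"
  end

  def self.down
    execute "ALTER TABLE amendments MODIFY c_name varchar(35);"
  end
end
```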

But of course, as we're just now building the application and its databases, we don't do this until we've completed the following steps.

4. Now we create our development database: rake db:create

5. And finally we create our tables: rake db:migrate

That's all there is to it.

Obviously, without the SQL CREATE statement in self.up, we restrict ourselves to incoherent designs. So even for your first project, you want to get this under your belt, so as never, never, never to settle for anything less than optimal table designs ― rendered, migrated, and preserved by a single line of code.


mike montagne

(C) Copyright, mike montagne, March 23, 2010

