While ORMs make up most of the published best practices for interfacing with databases (witness the success of DBIx::Class and many similar products), there is also a certain amount of backlash against them, e.g. ORM is an Antipattern.

I would add two further critiques of ORMs. First, most people use the ORM as their model layer (every Catalyst project I’ve seen does this, for instance), rather than implementing their model layer using the ORM as simply the database access layer. Using the ORM as your model layer violates abstraction: it ties your business logic intimately to the structure of your data. It makes you think of your data in terms of nouns defined by how the database itself is laid out. Instead, you should be thinking in terms of verbs: what do you want to do, not what do you want to do it to. That is, ask me how many users are online, not how many rows in the user table have the online column set to 1.
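To make the contrast concrete, here is a sketch of the two styles; the method, schema, and variable names are illustrative, not from a real project:

# Verb-oriented: the model answers the question; the caller never
# sees the table layout.
my $count = $model->count_users_online;

# Noun-oriented: the business logic reaches through a typical
# DBIx::Class resultset and has to know about the user table and
# its online column.
my $count = $schema->resultset('User')->search({ online => 1 })->count;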

Second, the vast majority of ORMs require you to write your migration scripts by hand. You have to define, using the ORM’s mini-language, what’s changed between two versions of your application. In fact, the only major exception to this that I’m aware of is DBIx::Class.

However, even given those downsides, ORMs do have their strengths. To me, they provide two main advantages over writing a model that uses SQL directly:

First, they solve the migration problem. You say “please update my database” and it goes out and reads your classes and updates your database to match them.

Second, they provide easy CRUD. You get simple, user-friendly methods for doing basic data manipulation.

You may notice that I didn’t include cross-database support. While this seems really nice at first, in my experience most projects will only ever run on one database. The main exception to this in the Perl world is running the test suite against SQLite. The model I suggest doesn’t quite support that, but in theory could be extended to allow for it.

While I do think the arguments against ORMs are often compelling, what they don’t do is provide alternative solutions to the problems that ORMs do solve well. So what I want to do here is propose an alternative to the ORM model of database access.

Examples below are in MySQL’s dialect of SQL.

Migration and the Schema

You declare your schema as a series of SQL files, each containing CREATE statements. To migrate, you use a tool that parses the SQL and updates your database, much like class-based migration tools would.

Example Schema1:

CREATE TABLE person (
   id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
   name VARCHAR(40)
);

Example Schema2:

CREATE TABLE person (
   id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
   name VARCHAR(40) NOT NULL,
   addr VARCHAR(200)
);

Migrating from Schema1 to Schema2 would run:

ALTER TABLE person
--  BEFORE name VARCHAR(40)
    MODIFY name VARCHAR(40) NOT NULL,
    ADD addr VARCHAR(200);

Migration is done either by comparing your schema directly with a database, or by comparing two different versions of the schema. The resulting SQL is either fed directly to the database by the tool, or written to a file for vetting.
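For illustration, driving that tool might look something like this; SQL::SchemaDiff and its API are hypothetical stand-ins, not an existing module:

use SQL::SchemaDiff;   # hypothetical module name

my $diff = SQL::SchemaDiff->new(
    from => 'schema/v1/person.sql',   # could also be a live $dbh
    to   => 'schema/v2/person.sql',
);
print $diff->as_sql;                  # redirect to a file for vetting
# ...or feed the ALTERs straight to the database:
# $diff->apply( DBI->connect(...) );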

CRUD

A tool reads the SQL for table definitions and generates a series of stored procedures for basic data manipulation. This can be rerun if you want to regenerate these procs after changing the table.

Another tool reads any stored procedure definitions and generates a class for calling them. The procs have extended metadata that allows the helpers to return data rather than statement handles: e.g. an array of rows, a single value, a single row, a hash with key and value taken from two columns, etc.

The first tool, run on Schema2, would create something like the following (along with other methods to delete and fetch rows):

CREATE PROCEDURE create_person(
    p_name VARCHAR(40),
    p_addr VARCHAR(200) )
RETURNS COLUMN id -- Note, special metadata here
MODIFIES SQL DATA
BEGIN
    INSERT INTO person ( name, addr ) VALUES ( p_name, p_addr );
    SELECT last_insert_id() AS id;
END

CREATE PROCEDURE update_person( 
    p_id BIGINT, 
    p_name VARCHAR(40), 
    p_addr VARCHAR(200) )
ARGS AS HASH -- More special metadata
MODIFIES SQL DATA
BEGIN
    UPDATE person SET name=COALESCE(p_name,name), addr=COALESCE(p_addr,addr) WHERE id=p_id;
END

And the second tool would create something like:

package MyProject::DB::Person;
use Any::Moose;
has dbh => (is=>'rw', isa=>'DBI::db', required=>1);
sub create {
    my $self = shift;
    my( $p_name, $p_addr ) = @_;
    my $sth = $self->dbh->prepare( "CALL create_person(?,?)" );
    $sth->execute( $p_name, $p_addr );
    my $row = $sth->fetchrow_hashref;   # the proc SELECTs the new id
    return $row->{'id'};
}
sub update {
    my $self = shift;
    my( $args ) = @_;
    my $sth = $self->dbh->prepare( "CALL update_person(?,?,?)" );
    $sth->execute( $args->{'id'}, $args->{'name'}, $args->{'addr'} );
}
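
The other return shapes mentioned above would change only how the generated helper reads the statement handle. For instance, a proc tagged with a hypothetical RETURNS HASH(id, name) bit of metadata might generate:

sub name_by_id {
    my $self = shift;
    my $sth = $self->dbh->prepare( "CALL person_names()" );
    $sth->execute;
    # Build a hash with key and value taken from two columns.
    my %names;
    while ( my $row = $sth->fetchrow_hashref ) {
        $names{ $row->{'id'} } = $row->{'name'};
    }
    return \%names;
}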

From your software

You would then use the helpers to call these stored procs. As your application becomes more complicated, you would likely want to write your own procs and subclass the generated helper class to make the helpers smarter, as in the sketch below. This is the same model you would use with, for instance, DBIx::Class.
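A minimal sketch of that subclassing step follows; find_by_name and the proc it calls are hypothetical, hand-written once the generated CRUD stops being enough:

package MyProject::Model::Person;
use Any::Moose;
extends 'MyProject::DB::Person';

# A hand-written helper wrapping a hand-written proc.
sub find_by_name {
    my ( $self, $name ) = @_;
    my $sth = $self->dbh->prepare( "CALL find_person_by_name(?)" );
    $sth->execute( $name );
    return $sth->fetchrow_hashref;
}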

The example below assumes further helpers to fetch a record and delete a record.

my $person = MyProject::DB::Person->new(dbh=>DBI->connect(...));
my $pid = $person->create( "Alice", "123 Chestnut St" );
my $alice = $person->get($pid);
$alice->{"addr"} = "456 Fun Ave";
$person->update( $alice );
$person->delete( $alice->{'id'} );
