The Joy of Prefetch

If you heard me speak at YAPC or you’ve had any kind of conversation with me over the last few weeks then it’s likely you’ve heard me mention the secret project that I’ve been writing for my wife’s school.

To give you a bit of background, there’s one afternoon a week where the students at the school don’t follow the normal academic timetable. On that afternoon, the teachers all offer classes on wider topics. This year’s topics include Acting, Money Management and Quilt-Making. It’s a wide-ranging selection. Each student chooses one class per term.

This year I offered to write a web app that allowed the students to make their selections. This seemed better than the spreadsheet-based mechanisms that have been used in the past. Each student registers with their school-based email address and then on a given date, they can log in and make their selections.

I wrote the app in Dancer2 (my web framework of choice) and the site started allowing students to make their selections last Thursday morning. In the run-up to the go-live time, Google Analytics showed me that about 180 students were on the site waiting to make their selections. At 7am the selections part of the site went live.

And immediately stopped working. Much to my embarrassment.

It turned out that a disk failed on the server moments after the site went live. It’s the kind of thing that you can’t predict. But it leads to lots of frustrated teenagers and doesn’t give a very good impression.

To give me time to rebuild and stress-test the site we’ve decided to relaunch at 8pm this evening. I’ve spent the weekend rebuilding the app on a new (and more powerful) server.

I’m pretty sure that the timing of the failure was coincidental. I don’t think that my app caused the disk failure. But a failure of this magnitude makes you paranoid, so I spent a lot of yesterday tuning the code.

The area I looked at most closely was the number of database queries that the app was making. There are two main actions that might be slow – the page that builds the list of courses that a student can choose from and the page which saves a student’s selections.

I started with the first of these. I set DBIC_TRACE to 1 and fired up a development copy of the app. I was shocked to see the app run about 120 queries – many of which were identical.
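
If you haven’t used it, DBIC_TRACE just makes DBIx::Class print every query it runs to STDERR. You can get the same effect from inside your code by switching on the storage object’s debug flag – a minimal sketch, where $schema is whatever your app calls its connected schema object:

  # Equivalent to setting DBIC_TRACE=1 in the environment:
  # from now on, every query DBIC runs is printed to STDERR.
  $schema->storage->debug(1);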

Of course I should have tested this before. And, yes, it’s an idiotic way to build an application. But I’m afraid that using an ORM like DBIx::Class can make it all too easy to write code like this. Fortunately, it makes it easy to fix it too. The secret is “prefetch”.

“Prefetch” is an option you can pass to the “search” method on a resultset. Here’s an example of the difference it can make.

There are seven year groups in a British secondary school. Most schools call them Year 7 to Year 13 (the earlier years are in primary school). Each year group will have a number of forms. So there’s a one to many relationship between years and forms. In database terms, the form table holds a foreign key to the year table. In DBIC terms, the Year result class has a “has_many” relationship with the Form result class and the Form result class has a “belongs_to” relation with the Year result class.
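
In DBIC code those relationships are declared in the result classes – something like this sketch, where the class and column names are my own inventions rather than the app’s real ones:

  package MyApp::Schema::Result::Year;
  use base 'DBIx::Class::Core';
  __PACKAGE__->table('year');
  __PACKAGE__->add_columns(qw( id name ));
  __PACKAGE__->set_primary_key('id');
  # One year has many forms.
  __PACKAGE__->has_many(forms => 'MyApp::Schema::Result::Form', 'year_id');

  package MyApp::Schema::Result::Form;
  use base 'DBIx::Class::Core';
  __PACKAGE__->table('form');
  __PACKAGE__->add_columns(qw( id name year_id ));
  __PACKAGE__->set_primary_key('id');
  # Each form belongs to exactly one year.
  __PACKAGE__->belongs_to(year => 'MyApp::Schema::Result::Year', 'year_id');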

A naive way to list the years and their associated forms would look like this:
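
(A sketch rather than the app’s actual code – it assumes the “name” columns and “forms” relationship from the example above.)

  # One query to fetch all of the years...
  my @years = $schema->resultset('Year')->all;

  foreach my $year (@years) {
    print $year->name, "\n";

    # ...and then each call to ->forms here runs another query.
    foreach my $form ($year->forms) {
      print '  ', $form->name, "\n";
    }
  }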

Run code like that with DBIC_TRACE turned on and you’ll see the proliferation of database queries. There’s one query that selects all of the years and then for each year, you get another query to get all of its associated forms.

Of course, if you were writing raw SQL, you wouldn’t do that. You’d write one query that joins the year and form tables and pulls all of the data back at once. And the “prefetch” option gives you a way to do that in DBIC as well.
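
Sticking with the same sketched schema, the prefetched version looks like this:

  my @years = $schema->resultset('Year')->search(
    {},
    { prefetch => 'forms' },
  )->all;

  foreach my $year (@years) {
    print $year->name, "\n";

    # No extra queries here - the form data came back with the years.
    foreach my $form ($year->forms) {
      print '  ', $form->name, "\n";
    }
  }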

All we have done here is to interpose a call to “search” which adds the “prefetch” option. If you run this code with DBIC_TRACE turned on, then you’ll see that there’s only one database query and it’ll be very similar to the raw SQL that you would have written – it brings back the data from both of the tables at the same time.

But that’s not all of the cleverness of the “prefetch” option. You might be wondering what the difference is between “prefetch” and the rather similar-sounding “join” option. Well, with “join” the columns from the joined table would be added to your main table’s result set. This would, for example, create some kind of mutant Year resultset object that you could ask for Form data using calls like “get_column(‘’)”. [Update: I was trying to simplify this explanation and I ended up over-simplifying to the point of complete inaccuracy – joined columns only get added to your result set if you use the “columns” or “select/as” attributes. And the argument to “get_column()” needs to be the column name that you have defined using those options.] And that’s useful sometimes, but often I find it easier to use “prefetch” as that uses the data from the form table to build Form result objects which look exactly as they would if you pulled them directly from the database.
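
To make that concrete, here’s a rough sketch of the “join” approach described in the update above (the column alias is invented for illustration):

  # With join, the related columns are only fetched if you ask for them
  # explicitly, and you read them back by the alias you gave them.
  my $years = $schema->resultset('Year')->search(
    {},
    {
      join    => 'forms',
      columns => [ 'me.name', { form_name => 'forms.name' } ],
    },
  );

  while (my $row = $years->next) {
    print $row->get_column('form_name'), "\n";
  }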

So that’s the kind of change that I made in my code. By prefetching a lot of associated tables I was able to drastically cut down the number of queries made to build that course selection page. Originally, it was about 120 queries. I got it down to three. Of course, each of those queries is a lot larger and is doing far more work. But there’s a lot less time spent compiling SQL and pulling data from the database.
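
The real schema is bigger than the toy example above, but the shape of the change was to prefetch several relationships in one go – something like this, with the relationship names invented for illustration:

  my $courses = $schema->resultset('Course')->search(
    {},
    {
      # Pull all of the related rows back in a single query rather
      # than issuing one query per course.
      prefetch => [ 'teacher', { sessions => 'term' } ],
    },
  );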

The other page I looked at – the one that saves a student’s selections – wasn’t quite so impressive. Originally it was about twenty queries and I got it down to six.

Reducing the number of database queries is a really useful way to make your applications more efficient and DBIC’s “prefetch” option is a great tool for enabling that. I recommend that you take a close look at it.

After crowing about my success on Twitter I got a reply from a colleague pointing me at Test::DBIC::ExpectedQueries which looks like a great tool for monitoring the number of queries in your app.
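
I haven’t tried it yet, but judging by its documentation a test looks something like this (the table names and the function under test are invented for illustration):

  use Test::More;
  use Test::DBIC::ExpectedQueries;

  # $schema is your connected DBIx::Class schema.
  expected_queries(
    $schema,
    sub {
      # Run the code you want to check.
      build_course_selection_page($schema);
    },
    {
      # Declare how many queries of each type you expect per table.
      course => { select => 1 },
      year   => { select => 1 },
    },
  );

  done_testing();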


DBIC Training in Granada

It’s been a while since I’ve run a training course alongside a YAPC. By my calculations, the last time was Riga in 2011. But I’ve been talking to the organisers of this year’s conference and we have a plan.

I’m going to be running a one-day introductory course on DBIx::Class before the conference (I think it’ll be on 1st September, but that’s not 100% certain yet). Full details are on the conference web site. There’s an early-bird price of 150 Euro and the full price is 200 Euro. The web site says that the early-bird price finishes today, but I wouldn’t be at all surprised if that gets extended for a few days at least.

Of course, readers of this blog will all already be experts in DBIC and won’t need this course. But I’m sure that most of you will have a colleague who would benefit from… well… a refresher on how DBIC works. Why not see if your company will pay for them to attend the course :-)

The course size is limited. So you might want to think about booking soon.

Hope to see some of you in Granada.

Two updates:

  1. The date has now been confirmed as 1st September.
  2. The early-bird pricing has been extended until 1st June.


Three times in the last few months I’ve had the “DBIC or raw DBI” discussion. People have told me that they don’t use DBIC because raw DBI is better. And each time, the person promoting DBI in the discussion has used an argument that boils down to “DBIC is probably useful for people who don’t know SQL very well”.

I find that argument really puzzling. Not least because I like to think that I know more than a little about SQL. SQL is a skill that has run through my career for longer than Perl. I’ve been using SQL since I left college in 1988. I only started using Perl in about 1996. And yet, although I still consider myself a bit of an expert in SQL, I use DBIC for pretty much all of the database work I do these days – and have done for about five years.

I use DBIC not because I don’t understand SQL. I use it because it makes my life easier. I use it because it frees up some of the time I used to spend dealing with the minutiae of database communication so that I can spend it working on other, more interesting, parts of my project.

When I’m running training courses that introduce DBIC I have a slide entitled “SQL is Boring”. It’s a joke of course but, like all the best jokes, it gets a laugh because there’s more than a little truth in it. There are, of course, many interesting SQL problems. I’ve spent many an enjoyable (if slightly frustrating at the time) hour trying to coax the right data out of a complex query with correlated subqueries, outer joins and aggregate functions. But that’s the exception rather than the rule.

The vast majority of the SQL I write for the applications I work on is incredibly boring. It’s boring because it’s all so similar. You get the data to present a list of objects to the user. The user selects the object they’re interested in, so you select all the data about that object. You might select some data about related objects. The user changes some of that data, so you update that row in the database. On a good day, you might delete an object from the database. Or insert a new one. Most of the SQL you need is like that. It’s boring.

We have computers to do the boring work for us. So let the computer generate all that boring SQL. Free up your time to work on the gnarly and interesting problems.

But that’s not the only advantage of using DBIC (or some other ORM). Think about the data that you get back from the database. The data you get back from a DBI call is an array. Or perhaps a hash. Or maybe a multi-dimensional data structure if you’re using one of DBI’s more complex fetch() methods. But it’s still a dumb variable. From DBIC, I get an object. An intelligent variable. A variable that knows how to react to various messages. A variable that will save any changes back to the database automatically without me having to worry about where it came from and making sure that I’m writing it back to the right place.
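
As a crude illustration of the difference (using an invented “student” table):

  # Raw DBI: you get a dumb hash back and you write the UPDATE yourself.
  my $row = $dbh->selectrow_hashref(
    'SELECT * FROM student WHERE id = ?', undef, $id,
  );
  $dbh->do(
    'UPDATE student SET email = ? WHERE id = ?', undef, $new_email, $id,
  );

  # DBIC: you get an object back, and it knows how to save itself.
  my $student = $schema->resultset('Student')->find($id);
  $student->update({ email => $new_email });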

I’m not saying for a second that there’s no place for DBI any more. For a start, DBI underlies DBIC so it’s still a vital part of our toolkit. And of course I still use it for quick one-off scripts. But when those scripts are still hanging round being used and maintained three months later (as they always are) I’ll rewrite them to use DBIC.

If you want to write large applications that are going to be easy to maintain and extend, then you should really be using DBIC (or something similar). I don’t care how well you know SQL. DBIC will make your life easier.

I’ve just mentioned a couple of reasons why I think that DBIC makes my life easier. I’m sure I’ve missed important stuff. What do you think? Why do you use DBIC instead of DBI?


Modern Perl in Linux Format

A couple of times, I’ve complained here about the standard of Perl articles in the British magazine Linux Format.

Following the second of those articles, I got into a discussion with Graham Morrison, the editor of the magazine, and he offered me the opportunity to improve matters by writing my own short series of tutorials for them.

The first of those tutorials appears in issue 151 of the magazine which will be appearing in UK newsagents about now.

The series is called “Modern Perl” (in an attempt to distance it from their earlier tutorials) and the first article is about how easy it is to write a database application using DBIx::Class. The second article will take the same database and build it into a simple web application using Dancer. That will hopefully be in issue 153 (skipping an issue). There will probably be a third article in the series which will add features to the web application.

I’ll find out what my rights are to the articles and hopefully I’ll be able to put them on the web at some point in the future.

If you see a copy in your newsagents then please consider picking it up. And if you enjoy the article, then please let the magazine know.
