5 Tips to Speed Up Your Rails App

Would you like to know how you can drop the request processing time for an action from 2 seconds to 0.2 seconds? With some excellent tools, some patience, and these tips, you too can speed up critical parts of your web application.

I recently had the opportunity to help a client (and all-around good guy) improve the efficiency of the API portion of his web application. His developer had done a great job with the logic and functionality, but the client thought it would be a good idea to have me take a long look at the code to see what improvements I could make. Here are five tips you can use to speed up the slow requests in your Rails app.

Before we dive in, be advised that this kind of work isn't something you should be doing at the outset of your project. You won't know yet where you'll be able to have the most impact with your efficiency improvements. As with this client's application, first get the functionality built, then start doing some real-world testing (in this case with a small set of beta testers) and monitor your application to see where the pain points are.

Tip 1: Before you get started with the code, take a look at the environment.

My work was limited to just the API portion of this application, so the operating environment is large chunks of data coming in at infrequent but regular intervals, completely separate from the other moving parts of the application. The controller that handles the API requests is part of the Rails application, so API requests were coming into the same pack of mongrels as web requests, sometimes slowing down interactive use of the app while large API requests were being handled. The first step was to fire up another pack of mongrels, create a new virtual host in the Apache configuration pointing at this new pack (api.example.com), and then point the API clients at the new virtual host.

The next step was to take a look at the data coming in to the app: XML payloads ranging from 20KB to 200KB. Since this incoming data was used to create a number of records in the database, and since we could control both the generator and the consumer of the data, it made sense to switch to YAML. This yielded both bandwidth savings and processing savings, as the YAML could be handed directly to ActiveRecord without massaging (the original XML wasn't the kind of attribute hash AR can easily handle).
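The idea looks roughly like this (a minimal sketch; the payload fields here are made up for illustration, not the client's actual schema):

```ruby
require 'yaml'

# Producer side: build attribute hashes shaped the way ActiveRecord
# expects them, then dump straight to YAML.
payload = [
  { 'action' => 'login',  'occurred_at' => '2008-01-01 12:00:00' },
  { 'action' => 'upload', 'occurred_at' => '2008-01-01 12:05:00' }
].to_yaml

# Consumer side: the YAML loads back into ready-to-use attribute hashes,
# so each one can go straight to something like UserLog.create(attrs),
# with no XML-parsing or hash-massaging step in between.
records = YAML.load(payload)
```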

Tip 2: Consolidate and aggregate.

I'm a fan of the skinny controller, fat model concept, so when I saw a loop in the controller that created a number of UserLog records for the current user, I immediately thought of moving that loop to the User model. Aside from simply cleaning up the controller, this would also reduce duplication, since the app now took either XML or YAML as input. I also knew that moving the creation of the UserLog records into the User model would result in a huge performance gain, since the UserLog model was also making additions to other associations on the User model. This was an example of OOP compartmentalization being at odds with efficiency. So, instead of having each new UserLog instance manipulate the same User associations, I had each UserLog instance return the changes that should be made, had the User model collect all those changes into a batch, and then had the User model apply all those changes to the associations once.
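That reshuffling can be sketched with plain Ruby objects (the class and method names here are illustrative stand-ins, not the client's actual models):

```ruby
# Each log entry reports the association changes it implies instead of
# applying them itself.
class UserLog
  def initialize(attrs)
    @attrs = attrs
  end

  # Return the changes this entry implies, rather than writing them
  # to the user's associations directly.
  def pending_changes
    { :visited_pages => [@attrs[:page]] }
  end
end

class User
  attr_reader :visited_pages

  def initialize
    @visited_pages = []
  end

  def log_batch(entries)
    logs = entries.map { |attrs| UserLog.new(attrs) }
    # Collect every change first...
    changes = logs.map { |log| log.pending_changes }
    # ...then touch the association once, instead of once per log entry.
    @visited_pages.concat(changes.map { |c| c[:visited_pages] }.flatten)
    logs
  end
end
```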

Going through tips 1 and 2 exhausted all the easy options, and so far the performance gain was minor. Now it was time to bring in the big guns.

Tip 3: Use the right tool for the job.

Charlie Savage has given the world a wonderful gift in the form of the Ruby profiling tool, ruby-prof. He has also explained how to profile your Rails application using ruby-prof, and even given an example of profiling a Rails application with ruby-prof. I won't go into the details of using ruby-prof, since Charlie's blog posts do such a good job of that. Once you have ruby-prof showing you what the most expensive and frequent method calls are, you can start attacking them one by one.

Tip 4: Magic is not the friend of processing efficiency.

ActiveRecord's "magic" makes it so easy to get your business logic translated into code, and is a large part of the code-writing efficiency gains you get by using Rails. However, the cost of that magic is processing time, as ActiveRecord does a lot of work behind the scenes to make that magic happen. When you are using the methods provided by AR, such as "self.app = some_app" when your model "belongs_to :app", you are executing a fair amount of code. When you see "Kernel#clone" or "ActiveRecord::Associations::AssociationProxy#load_target" or "ActiveRecord::Associations::AssociationProxy#method_missing" near the top of ruby-prof's output, it's time to strip away some of the magic from your code. If you aren't going to be using the other methods from "self.app" later in your code (such as self.app.name, etc.), then replace "self.app = some_app" with "self.app_id = some_app.id" and avoid all the work AR does to make the other assignments you don't need.

Another quick fix is one of the tips from the Vroom blog post on getting "#_parse" out of your ruby-prof results. In this case I was dealing with datetimes from a MySQL database, so these definitions helped solve that problem:

def start_time
  @start_time ||= Time.parse(read_attribute_before_type_cast('start_time'))
end

def end_time
  @end_time ||= Time.parse(read_attribute_before_type_cast('end_time'))
end
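To see the pattern in isolation, here's a stand-in version with read_attribute_before_type_cast stubbed out (since there's no ActiveRecord model handy): the raw string gets parsed at most once per instance, and every later call hits the cached Time object.

```ruby
require 'time'

class Event
  def initialize(raw_start)
    @raw = { 'start_time' => raw_start }
  end

  # Stand-in for ActiveRecord's raw-attribute reader.
  def read_attribute_before_type_cast(name)
    @raw[name]
  end

  # Memoized: Time.parse runs only on the first call.
  def start_time
    @start_time ||= Time.parse(read_attribute_before_type_cast('start_time'))
  end
end

event = Event.new('2008-01-01 12:00:00')
first  = event.start_time
second = event.start_time
# Same object both times -- the parse only happened once.
```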

Finally, to avoid expensive calls to method_missing, stop calling methods that don't exist! :) Instead of

  App.find_or_create_by_name(name)

use

  App.find(:first, :conditions => ['name = ?', name]) || App.create(:name => name)

It may not read as well, but it does perform better.
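To see why the dynamic finder costs more, here's a toy stand-in for the mechanism (this is not ActiveRecord's actual implementation, just a sketch): every call to a method that doesn't exist falls through to method_missing, which has to convert and pattern-match the name before doing any real work.

```ruby
class Finder
  def find_by_name(name)
    name # stand-in for a real lookup
  end

  def method_missing(sym, *args)
    # Extra work on *every* call: symbol-to-string conversion, a regexp
    # match, and a second dispatch via send.
    if sym.to_s =~ /\Afind_or_create_by_(\w+)\z/
      send("find_by_#{$1}", *args)
    else
      super
    end
  end
end

finder = Finder.new
finder.find_by_name('app')            # direct: one ordinary method call
finder.find_or_create_by_name('app')  # dynamic: method_missing + regexp + send
```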

Tip 5: Know what's slow.

This goes back to the idea of knowing what the pain points are so you don't improve the wrong things, but this time on a more granular level. In this particular case, I could see that very little time was spent dealing with the database, even though a large number of queries were being executed. As a result, trading Ruby code for additional database queries improved efficiency. For example, a change like the one below can save quite a bit of time, cutting out the expense of instantiating a whole lot of database results and iterating over them in favor of letting the database do the work.

# Expensive
App.find(:all, :conditions => "group_id = 1").collect(&:name).include?(name)

# Cheap
App.count(:conditions => ["group_id = 1 and name = ?", name]) > 0

I hope this has been a useful list of tips for you as you take a look at improving the efficiency of your Rails application. It's not particularly sexy work, but it can be fun.

Shameless plug: If you'd like to have the benefits of efficient code without all the work, contact me to review your Rails app for efficiency and production readiness. I've helped a number of clients polish up their applications for prime time through code reviews and profiling, and I'd be happy to help you, too.