This blog post is adapted from a talk I gave at Austin on Rails.
Ruby on Rails is a powerful, user-friendly web framework that allows developers to rapidly build applications. Its wide popularity is largely due to “the Rails way,” aka convention over configuration. Scaling Rails apps used to really suck (Twitter Fail Whale, anyone?), but we’ve come a long way.
Caching is one strategy that eases scaling pains, and one I often see Rails developers overlook. Getting started with caching can be confusing, because the terminology and documentation can be convoluted, especially if you’re not already an expert.
In this blog, the first of a two-part series on accelerating Rails, I’ll discuss caching options that come built-in with Rails and best practices for their effective use. In part two, I’ll cover dynamic edge caching and integration with Fastly’s acceleration platform.
Before digging in, it’s important to understand the distinction between types of cacheable content on the web.
Static content refers to web objects like images, scripts, and stylesheets — content that doesn’t change often, and when it does, you can typically control the changes.
Dynamic content, on the other hand, includes web objects like JSON or HTML that change frequently, usually because of end-user changes or interaction with apps. In Rails, you can manage static content with the asset pipeline and use Fragment caching to cache dynamic HTML.
The built-in Rails caching options include:

- SQL query caching
- Page and Action caching
- The Asset Pipeline
- Fragment caching
More details about each one of these can be found in the Rails Caching Guide.
Rails provides a SQL query cache, used to cache the results of database queries for the duration of a request. The cache store that backs Rails.cache (used in the examples below) is configured in the appropriate config/environments/*.rb file (usually production.rb):
config.action_controller.perform_caching = true
config.cache_store = :mem_cache_store, "cache-1.example.com"
The caveat of query caching is that results are only cached for the duration of that single request; they do not persist across requests. That’s unfortunate, since caching query results between requests would be far more beneficial. However, you can implement cross-request query caching yourself using the standard
Rails.cache interface. For example, you could create a class method on a model that caches the query results:
class Product < ActiveRecord::Base
  def self.out_of_stock
    Rails.cache.fetch("out_of_stock", expires_in: 1.hour) do
      # Call to_a so the query actually runs and its results
      # (not a lazy relation) are what gets cached.
      Product.where("inventory.quantity = 0").to_a
    end
  end
end
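Cross-request caching with Rails.cache.fetch follows read-through semantics: on a miss, the block runs and its result is stored; on a hit, the block is skipped entirely. Here’s a self-contained sketch of those semantics (a toy in-memory store, not Rails’ actual implementation):

```ruby
# A toy illustration of the read-through behavior behind
# Rails.cache.fetch: hit skips the block, miss runs and stores it.
class ToyCache
  def initialize
    @store = {}
  end

  def fetch(key)
    return @store[key] if @store.key?(key)  # cache hit: skip the block
    @store[key] = yield                     # cache miss: compute and store
  end

  def delete(key)
    @store.delete(key)                      # explicit invalidation
  end
end

cache = ToyCache.new
calls = 0
2.times { cache.fetch("out_of_stock") { calls += 1; ["widget"] } }
puts calls            # => 1  (second call was a hit)
cache.delete("out_of_stock")
cache.fetch("out_of_stock") { calls += 1; [] }
puts calls            # => 2  (recomputed after invalidation)
```

With a real cache store you’d pair this with invalidation: delete (or expire) the key whenever the underlying data changes, as the expires_in option above does coarsely.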
One thing to note is that you should set up a
cache_store that works for you. The default store writes to disk, which gets slow once you put lots of objects in your cache; an in-memory store like memcached is usually a better fit in production.
When I talk about caching to a big group, I poll the audience to get a sense of what caching strategies they use. I have literally only met one person out of hundreds who used Rails Page or Action caching in production. This lack of use probably explains why Page and Action caching were removed from Rails 4 core and extracted into their own gems. On top of that, most of the recent caching work in Rails has been on Fragment caching. Because they have been removed from Rails core, I’m not going to go into depth on their use. However, in part two, I’ll talk about some parallels between action caching and implementing dynamic edge caching.
Since Rails 3.1, the Asset Pipeline has provided a highly useful, easy-to-use tool that makes dealing with static content (and static content caching) quite simple. I recommend the following settings in the appropriate config/environments/*.rb file:
config.serve_static_assets = false
Offload static asset serving to Nginx or Apache by disabling the Rails server from serving assets and content in
/public. Nginx and Apache are much faster at serving static files than your Rails server.
config.assets.css_compressor = :yui
config.assets.js_compressor = :uglifier
Rails 3 enabled asset compression with config.assets.compress = true, but Rails 4 now requires you to explicitly set your JS and CSS compressors. The yui-compressor and uglifier gems provide the compressors referenced above.
config.assets.digest = true
Asset Digests are an easy way to avoid dealing with cache invalidation when static content changes. I highly recommend you turn this option on.
config.action_controller.asset_host = "http://cdn.myfastsite.com"
Serve your assets from a CDN. This one is pretty self-explanatory.
config.static_cache_control = "public, s-maxage=15552000, max-age=2592000"
Use proper Cache-Control headers. More on this later.
Taking full advantage of the Asset Pipeline configuration options will improve static asset load times, making for happier end users.
Let’s quickly discuss the Rack::Deflater middleware included with Rails, which you can enable in your application.rb file.
# in application.rb
module FastestAppEver
  class Application < Rails::Application
    config.middleware.use Rack::Deflater
  end
end
This middleware will compress (using gzip, deflate, or whatever the request’s Accept-Encoding allows) every response that leaves your application. I highly recommend using it to speed up delivery and reduce bandwidth in your applications. More info is available in Thoughtbot’s Rack::Deflater blog post.
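To see why this matters, here’s a rough illustration of what gzip does to a typical (repetitive) HTML body — this is plain Zlib in Ruby, standing in for what Rack::Deflater does when the client sends Accept-Encoding: gzip:

```ruby
require "zlib"
require "stringio"

# Repetitive markup, like a long product list, compresses dramatically.
body = "<li>product</li>" * 500

gzipped = StringIO.new.tap do |io|
  gz = Zlib::GzipWriter.new(io)
  gz.write(body)
  gz.close
end.string

puts body.bytesize                           # => 8000
puts gzipped.bytesize < body.bytesize / 10   # => true (well over 90% savings)
```

The exact ratio depends on your markup, but HTML, JSON, and CSS routinely shrink by an order of magnitude.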
Fragment caching is how you can cache dynamic HTML inside your Rails applications. Fragment caching exposes a
cache method that is used in view templates like this:
# products/index.html.erb
<% cache(cache_key_for_products) do %>
  <% Product.all.each do |p| %>
    <%= link_to p.name, product_url(p) %>
  <% end %>
<% end %>
Wrap an arbitrary piece of HTML in this
cache tag to cache it in the CacheStore that you set up. You can also get more complex and arbitrarily nest
cache tags. This is commonly referred to in the Rails community as Russian Doll caching. More details are provided by DHH in his blog post.
# products/index.html.erb
<% cache(cache_key_for_products) do %>
  All available products:
  <% Product.all.each do |p| %>
    <% cache(p) do %>
      <%= link_to p.name, product_url(p) %>
    <% end %>
  <% end %>
<% end %>
I think Fragment caching is the best addition to the Rails caching techniques, and I’ve seen more Rails devs use it than page or action caching. I find Russian Doll caching confusing and have actually avoided implementing it in the past because of the strange cache key scheme. Plus, it is extremely difficult to retrofit into view templates in legacy Rails apps.
The Rails Russian Doll cache key format is actually quite clever and makes a lot of sense if you are familiar with memcached. The id/timestamp cache key format is a way to get around the fact that memcache doesn’t support wildcard purging.
Instead of thinking that cache keys always map to the same content, and that every key maps to a unique piece of content, I like to think of cache keys as mapping to the most up-to-date content (much like a database primary key). I find it easier to reason about complex caching strategies this way, given that your cache supports a fast enough invalidation mechanism. I’ll talk about cache keys more in-depth in part two.
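Concretely, Rails builds a model’s cache key from its id and updated_at timestamp, so touching a record automatically points readers at a fresh key; the stale entry is simply never read again, which is exactly the workaround memcached’s missing wildcard purge requires. A rough sketch of the key format (the exact timestamp precision varies by Rails version):

```ruby
# A minimal sketch of the Russian Doll cache key scheme:
# "<table_name>/<id>-<updated_at timestamp>". A new updated_at
# yields a new key, so old fragments expire by never being read.
def cache_key_for(table_name, id, updated_at)
  "#{table_name}/#{id}-#{updated_at.utc.strftime("%Y%m%d%H%M%S")}"
end

t = Time.utc(2014, 1, 1, 12, 0, 0)
puts cache_key_for("products", 42, t)  # => "products/42-20140101120000"
```

This is why `touch: true` on associations matters for nested fragments: touching the parent bumps its timestamp, which changes the outer key.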
One last thought about Fragment caching: it’s targeted primarily at dynamic HTML. With the removal of page and action caching, Rails now lacks a good built-in mechanism for dynamic API caching. I’ll cover dynamic API caching in part two.
Rails does a really good job of abstracting HTTP Headers. This is great, until you need to interact with them at a more fundamental level. Here’s an explanation of common HTTP Headers that affect caching and some tips for best practices.
In the asset pipeline Cache-Control example above, I used a pretty long string with a couple different “directives.” What do all of these mean?
public: please cache this response for the time specified. The alternative is private, which tells shared caches (like CDNs) not to store the response; only the end user’s browser may.
max-age: the length of time, in seconds, a piece of content may be cached. This applies to all caches unless otherwise stated, though I typically think of max-age as applying to the browser. Base the value on how often things change: if you only update your CSS twice a year, it’s probably safe to cache it for at least a few weeks.
s-maxage: the length of time the content may be cached in shared caches, i.e., CDNs and caching proxies like Varnish. When present, it overrides max-age for those caches.
By using two different values (a shorter max-age for the browser and a longer s-maxage for the CDN), you can ensure your end users revalidate often enough to see up-to-date content while maximizing the time content lives in the CDN, which minimizes requests back to your application server. Check out Section 14.9 of RFC 2616 for a full list of Cache-Control directives.
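For a concrete sense of the directive values in the static_cache_control example above, converting them to days shows the browser/CDN split:

```ruby
# Directive values from the example Cache-Control header, in seconds.
max_age  = 2_592_000   # browsers revalidate after this long
s_maxage = 15_552_000  # shared caches (CDNs) may keep it this long

seconds_per_day = 86_400
puts max_age / seconds_per_day    # => 30  (days in the browser)
puts s_maxage / seconds_per_day   # => 180 (days at the edge)
```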
The ETag HTTP Header is provided to determine if content has changed and needs to be updated. Rails automatically adds ETag headers into responses with the Rack::ETag middleware. At a high level, ETags enable Rails to serve 304 Not Modified responses when end-user data does not need to be updated. Unfortunately, to be able to do this, Rails must still render the response every time to generate the ETag, which is lame for performance. However, Rails does provide a way to skip that rendering using conditional GETs with the stale? and fresh_when controller helpers.
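The conditional GET handshake itself is simple enough to sketch in plain Ruby — this toy (not Rack’s actual implementation) hashes the body into an ETag and answers 304 with no body when the client already has it:

```ruby
require "digest"

# A minimal sketch of the ETag handshake behind Rack::ETag:
# compare the body's hash against the client's If-None-Match header
# and skip sending the body on a match.
def respond(body, if_none_match: nil)
  etag = %("#{Digest::MD5.hexdigest(body)}")
  if etag == if_none_match
    [304, etag, ""]     # not modified: empty body saves bandwidth
  else
    [200, etag, body]   # full response, tagged for next time
  end
end

status, etag, _body = respond("<html>products</html>")
puts status                                               # => 200
status, _, body = respond("<html>products</html>", if_none_match: etag)
puts status                                               # => 304
puts body.empty?                                          # => true
```

Note the 304 saves bandwidth, not render time — which is exactly why the controller-level helpers, which bail out before rendering, are worth using.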
Vary is used to change a response based on the value of another HTTP header. This is best explained by example. Let’s take a Vary header with Accept-Encoding as its value.
This response can be different based on the value of the Accept-Encoding header. For example, the value of Accept-Encoding can either be gzip or deflate.
Accept-Encoding: gzip => Response A (gzip encoded)
Accept-Encoding: deflate => Response A' (deflate encoded)

I learned my very simple Vary header best practices from Steve Souders, whom I’ve had the pleasure of working with at Fastly. His advice? Don’t Vary on anything except Accept-Encoding/Content-Encoding.
Something you never want to do is Vary on the User-Agent header. It can sound quite convenient, especially if you need to serve different-sized images to mobile clients. But there are thousands of user agents in the wild, so varying on User-Agent fragments your cache into thousands of variants, all but eliminating caching benefits — the same cached response is rarely served twice.
Check out Accelerating Rails part two, where I’ll talk about how to integrate edge caching into your apps.