CDN vs Caching: What is the Difference?

The following post was adapted from an article Doc wrote for O’Reilly Radar.

Since a CDN is essentially a cache, you might be tempted to avoid complexity by not making use of the browser cache. However, each cache has advantages that the other does not provide. In this post, I'll explain the advantages of each, and how to combine the two for optimal website performance and the best experience for your end users.

What is Cache Busting?

Cache busting is the practice of changing an asset's URL whenever you replace a file that already exists and is cached (for example, linking to /css/main.css?v=2 instead of /css/main.css). It is helpful because it prevents the browser from serving the stale copy of the file you are replacing.

Why Use a CDN at All?

While CDNs do a good job of delivering assets very quickly, they can't do much about users who are out in the boonies and barely have a single bar of reception on their phone. As a matter of fact, in the US, the 95th percentile for the round trip time (RTT) to all CDNs is well in excess of 200 milliseconds, according to Cedexis reports. That means at least 5% of your users, if not more, are likely to have a slow experience with your website or application. For reference, the 50th percentile, or median, RTT is around 45 milliseconds.

So why bother using a CDN at all? Why not just rely on the browser cache?

  1. Control. With most CDNs, you have the option to purge your assets from their cache, something that's very useful when you make changes to your assets. You do not have this option with the browser cache.

  2. First impressions matter. The browser cache doesn't help at all the first time a user visits your site, since it's cold (empty of useful objects). CDNs make the user experience with a cold browser cache as fast as possible, and a warm browser cache will make consecutive pages even faster.

  3. CDNs still have a geographical advantage. Even if a user falls in the 95th percentile, a 250 millisecond RTT is still better than a 350 millisecond RTT, especially when you consider that every asset takes at least one round trip on an already-open connection, and opening that connection costs an additional round trip. Transport Layer Security (TLS, formerly referred to as SSL) adds at least one more round trip per connection, sometimes two. All these extra round trips stack up fast.

  4. CDNs are much more efficient, since their cache is shared between all your users. This means less load on your origin servers, without the need for a caching layer of your own in your platform.

How To Use Both CDN and Browser Cache

Now that we’ve determined a CDN is still important and that the browser cache is also quite valuable, here are two approaches to combine the two:

  1. A short time to live (TTL) for the browser cache, combined with long TTLs and purging on the CDN side.

  2. A long TTL for both the CDN and the browser cache, but with version-based cache busters.

Below, I’ll go through each approach in detail and discuss how you can even use a combination of the two.

Short & long TTLs

Ideally you would use a TTL for the browser cache that covers a whole visit, but not much more. That way, your users have speedy page loads throughout their visit, but don’t end up with outdated assets when they return later. Your analytics tools should be able to tell you what the visit times look like for your site, but generally 5 or 10 minutes is a good ballpark.

On the Fastly side, I would recommend using something like a month or a year as the TTL, and setting up purging as part of your deploy pipeline so Fastly can serve the new versions as soon as possible. For more information on purging, see https://docs.fastly.com/guides/purging/.

Keep in mind: don't send the purge command until the new versions of the assets you're updating are guaranteed to be served by the origin. I've witnessed cases where someone sent a purge to Fastly, and Fastly immediately re-fetched the file before it had finished syncing to the origin servers, which led to a lot of confusion.
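
To illustrate the right ordering, here's a minimal sketch in Python, assuming a hypothetical wait_for_origin_sync() helper and a service that accepts URL purges via an HTTP PURGE request (adjust for your own deploy tooling and purge setup):

import requests  # third-party HTTP client, assumed available

ASSET_URLS = [
    "https://www.example.com/css/main.css",
    "https://www.example.com/js/app.js",
]

def deploy_and_purge():
    # Hypothetical helper: block until every origin server is confirmed
    # to be serving the newly deployed files.
    wait_for_origin_sync()

    for url in ASSET_URLS:
        # Ask Fastly to drop its cached copy so the next request
        # fetches the new version from the origin.
        response = requests.request("PURGE", url, timeout=10)
        response.raise_for_status()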

To set the browser cache TTL, use a Cache-Control header in your origin response. Then use either the Surrogate-Control header, or an override in your Fastly configuration, to set the TTL there.

Here’s an example of the former:

Cache-Control: max-age=600
Surrogate-Control: max-age=31536000

In the above example, the Cache-Control header instructs the browser to cache the response for 10 minutes, while the Surrogate-Control header tells Fastly to cache it for a year. The Surrogate-Control header is stripped before the response is sent to the client, hiding this implementation detail from users. See our docs for more information about which headers you can use to control TTLs with Fastly.
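
If your origin is an application rather than a plain web server, emitting both headers is a one-liner each. Here's a minimal, illustrative sketch using Python's built-in http.server; the handler and the hard-coded asset are stand-ins, not anything from the original article:

from http.server import BaseHTTPRequestHandler, HTTPServer

class AssetHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"body { color: #333; }"  # stand-in for a real stylesheet
        self.send_response(200)
        self.send_header("Content-Type", "text/css")
        # Browsers cache for 10 minutes...
        self.send_header("Cache-Control", "max-age=600")
        # ...while Fastly caches for a year (stripped before reaching the client).
        self.send_header("Surrogate-Control", "max-age=31536000")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), AssetHandler).serve_forever()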

Version-based cache busters

Despite the name, cache busters can actually improve caching when used wisely. A cache buster is simply an extra query string parameter added to the URL. While web servers and most application servers ignore query string parameters they're not interested in, caches have to assume that any difference in the query string can influence the result. That includes the browser cache. Browsers have to treat each of the following as three unique objects, even if the server returns the exact same response for all of them:

https://www.example.com/css/main.css
https://www.example.com/css/main.css?cb=foo
https://www.example.com/css/main.css?foo=bar

Say your app is on build 133. Instead of linking to https://www.example.com/css/main.css your app would link to https://www.example.com/css/main.css?v=133. When you make build 134, you link to https://www.example.com/css/main.css?v=134 instead. Now the browser has to fetch this new version of main.css, even though it still had an unexpired copy.

Common practices are to use a (short) hash of the file content, a build number, or a commit hash.

My personal preference is to use a hash of the file content. That way, the cache buster only changes when the content changes, whereas a build number or commit hash changes whenever anything in the project changes. However, since every asset URL on your site needs a cache buster, it can be simpler to use a single build number or commit hash for all of them than to track a separate hash per file.
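
As a rough sketch of the content-hash approach (the function name and paths here are illustrative, not from the original article), a build step might compute the cache buster like this:

import hashlib

def cache_busted_url(public_path, file_on_disk):
    # Hash the file contents and keep a short prefix; the value only
    # changes when the content changes.
    with open(file_on_disk, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()[:8]
    return f"{public_path}?v={digest}"

# Example: cache_busted_url("/css/main.css", "build/css/main.css")
# might return "/css/main.css?v=3fa1c2d9"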

The downside to this approach is having to update all of the asset URLs for each change. The upside is that you can cache your assets in the browser with nice long TTLs, and still have the browser fetch fresh assets as soon as they change, because the new cache buster makes the old cached copy irrelevant.

Mix and match

Because Fastly's CDN enables you to purge outdated content within 150ms, you should consider caching all of your HTML pages as well. However, while you can re-engineer your site to use cache busters for assets like images, stylesheets, and JavaScript, you should never use cache busters on the URLs of the pages themselves, since search engines like Google may penalize page URLs that contain query strings.

So when caching pages, consider using the short and long TTL technique for your pages, and cache busters for the assets used by said pages.

Advanced bits: revalidation

A very nice side effect of using the browser cache is that browsers will keep objects around even after they expire, and send revalidation headers with their requests for them. If the object being requested has not changed, your CDN can simply respond with a 304 Not Modified status, which has no body, telling the browser it can keep using the expired object and optionally giving it a new TTL.

While each request that gets a 304 response still takes a round trip to complete, the lack of the response body means quite a bit of bandwidth savings. Not only is that a benefit to users on slow connections, it might also reduce your monthly CDN payments.

To make revalidation work, all you have to do is make sure your origin includes a Last-Modified or ETag header in its responses. The good news is that most web servers already include Last-Modified and ETag headers for any static files they serve from disk. The value of the Last-Modified header is based on the file's modification time. The value of the ETag header is based (in Apache) on the modification time, inode number, and size.

When a browser notices one of these two headers, or both, on an expired object in its cache, it will add an If-Modified-Since header with the value of Last-Modified and/or an If-None-Match header with the value of ETag to its request. An object is considered unchanged if the value of If-None-Match matches the ETag, and if the value of If-Modified-Since matches or is later than the Last-Modified date.
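
To make that comparison concrete, here's a simplified sketch of the check as an origin or cache might perform it; this is an illustration of the rule above, not Fastly's actual implementation:

from email.utils import parsedate_to_datetime

def is_unchanged(request_headers, etag, last_modified):
    """Return True if a 304 Not Modified response is appropriate."""
    if "If-None-Match" in request_headers:
        return request_headers["If-None-Match"] == etag
    if "If-Modified-Since" in request_headers:
        # Unchanged if the object hasn't been modified since the date
        # the client last saw.
        return (parsedate_to_datetime(request_headers["If-Modified-Since"])
                >= parsedate_to_datetime(last_modified))
    return False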

If you have a single web server for your static assets, you probably already have revalidation working perfectly due to common defaults.

However, if you use multiple web servers for redundancy, and those servers each have local storage instead of shared storage, you could be causing revalidation to fail randomly. Because you can't guarantee a specific file is assigned the same inode number on each web server, the ETag header generated for it will be different from server to server. And most deploy scripts do not preserve modification time when copying files, which means the Last-Modified header will differ as well.

This is bad for two reasons. First, when Fastly talks to your origin, a lot of bandwidth can be wasted on full responses where 304 responses would have sufficed. Second, different Fastly caches could end up with different ETag and Last-Modified values for the same object, depending on which origin server they fetched it from. Since browsers aren't guaranteed to talk to the same Fastly cache for every request, they too can end up receiving unneeded full responses because of a mismatch in values.

To make optimal use of revalidation if you have multiple web servers with local storage, I recommend turning off ETag in favor of using Last-Modified exclusively and making sure that your deploy script preserves modification time when copying files.
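
Turning off ETag is usually a small web server configuration change (in Apache, for example, the FileETag directive controls it). Preserving modification times is mostly a matter of how you copy files; as an illustrative sketch, a Python deploy step could use shutil.copy2, which keeps the original mtime (rsync's archive mode does the same):

import shutil
from pathlib import Path

def deploy_assets(build_dir, web_root):
    for src in Path(build_dir).rglob("*"):
        if src.is_file():
            dest = Path(web_root) / src.relative_to(build_dir)
            dest.parent.mkdir(parents=True, exist_ok=True)
            # copy2 preserves the modification time, so Last-Modified
            # stays identical across all origin servers.
            shutil.copy2(src, dest)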

If you use a cloud storage provider like Google Cloud Storage or Amazon S3, the ETag header should already be set to a hash of the content automatically. This makes cloud storage one of the best origins for Fastly, since re-uploading the exact same file does not change its ETag.

Further reading

We hope you found this post useful for making the most out of your CDN and browser cache. If you’re interested in fine tuning your performance measuring (and maximizing CDN performance), I recommend these posts from our VP of Technology, Hooman Beheshti: “The truth about cache hit ratios” and “Cache hit ratios at the edge.”

Rogier Mulhuijzen
Senior Professional Services Engineer

Rogier “Doc” Mulhuijzen is a senior professional services engineer and Varnish wizard at Fastly, where performance tuning and troubleshooting have formed the foundation of his 18-year career. When he’s not helping customers, he sits on the Varnish Governance Board, where he helps give direction and solve issues for the Varnish open source project. In his spare time, he likes to conquer all terrains by riding motorcycles, snowboarding, and sailing.
