When I first joined Fastly two years ago, I had a vague idea of what Content Delivery Networks – CDNs – were and how they worked. Since then I’ve learned quite a bit more from some very bright folks and now have a pretty good grip on things.
It recently occurred to me that there are probably a lot of people who think about CDNs the way I once did, so I decided to share what I’ve learned by writing this article and hopefully in the process showcase some of the ways Fastly is different from traditional CDNs.
What does a CDN do?
Originally, I thought of a CDN as something that made images, scripts, and styles “faster”. While this is technically true, there’s a better way to define what “fast” means in this context. The goal of any CDN is to reduce latency, roughly defined as the delay between when you send a request and when you receive the response.
Imagine you’re in San Francisco and you request an image from a server in London, 5,300 miles away. It would typically take around 300 milliseconds to send the request and receive the response.
If you were to request the same image from a server in San Jose, about 50 miles from San Francisco, it would take roughly 10 milliseconds. That’s 30 times better than the first case, but because we’re talking about milliseconds the difference might seem imperceptible.
However, that barely perceptible difference is huge once you consider that a typical webpage can include over 2 megabytes of content spread across 30 or more requests. Because browsers make only a small number of concurrent requests, and each request may involve several round trips to the server, those milliseconds add up to many seconds, making the website slow… And slow is bad.
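To get a feel for how those milliseconds compound, here is a rough back-of-the-envelope model; the concurrency and round-trip numbers are illustrative assumptions, not measurements from any real browser:

```python
# Rough model of how per-request latency adds up across a page load.
# All numbers are illustrative assumptions, not measurements.

def page_load_estimate(rtt_ms, num_requests, concurrent=6, round_trips=2):
    """Estimate the total network wait time for a page, in milliseconds.

    rtt_ms:       round-trip time to the server
    num_requests: total resources the page needs
    concurrent:   simultaneous connections the browser will open
    round_trips:  round trips per request (e.g. handshake + GET)
    """
    batches = -(-num_requests // concurrent)  # ceiling division
    return batches * round_trips * rtt_ms

# 30 requests against a server 5,300 miles away (~300 ms RTT)...
london = page_load_estimate(rtt_ms=300, num_requests=30)
# ...versus a nearby cache (~10 ms RTT).
san_jose = page_load_estimate(rtt_ms=10, num_requests=30)

print(london, san_jose)  # 3000 100
```

Three seconds of pure network wait versus a tenth of a second — that’s the gap a CDN exists to close.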
Bottom Line: every CDN on the planet moves content closer to the user in order to reduce latency and improve the user experience.
How does a CDN work?
From the examples above, the overall answer is pretty straightforward: a CDN moves content physically closer to the users who are requesting it, serving it faster. In theory this is neat, elegant, and self-explanatory. In practice, however, there are some pretty gnarly technical challenges.
First, in order to reduce the latency for any particular user, a CDN must have a content caching server – a cache – that’s close to them. Unfortunately it’s not feasible to have a nearby cache for every possible internet user. Instead, we organize the caches into groups called Points of Presence (POPs), distribute them across large geographic regions (Europe, the US, Asia, etc.), and then place them in major population centers within those regions.
Next, given a request from a single user, a CDN must direct it to the closest POP. Most CDNs do this using a technology called GeoIP. GeoIP can be thought of as a large lookup table that maps IP addresses to geographic regions (country, city, etc.). When a request is processed, the CDN references the table and directs the user’s traffic to the closest available server.
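A toy version of that lookup might look like the following sketch; the CIDR ranges and POP names are made up for illustration (real GeoIP databases map millions of ranges, and real CDNs also weigh server load and network conditions):

```python
import ipaddress

# Toy GeoIP table: CIDR ranges mapped to the nearest POP.
# The ranges (RFC 5737 documentation addresses) and POPs are made up.
GEOIP_TABLE = [
    (ipaddress.ip_network("203.0.113.0/24"), "San Jose"),
    (ipaddress.ip_network("198.51.100.0/24"), "London"),
    (ipaddress.ip_network("192.0.2.0/24"), "Tokyo"),
]

def closest_pop(client_ip):
    """Return the POP serving the network that contains client_ip."""
    addr = ipaddress.ip_address(client_ip)
    for network, pop in GEOIP_TABLE:
        if addr in network:
            return pop
    return "default"  # fall back to a primary POP

print(closest_pop("203.0.113.7"))  # San Jose
```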
Finally, one can think of a cache as a large key-value store. When a request comes in, it’s the cache’s job to determine what the user is requesting, locate the data, and send it back. Many pieces of request information can be used to determine which content to serve, including the domain name, path, query parameters, and even headers. Caches employ multi-level lookup tables with optimized algorithms to find the correct content in the shortest amount of time.
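As a sketch, a cache keyed on those request attributes might look like this; a plain dictionary stands in for the multi-level lookup structures a real cache uses:

```python
# Minimal sketch of a cache keyed on request attributes.
# A dict stands in for a real cache's multi-level lookup tables.

class Cache:
    def __init__(self):
        self.store = {}

    @staticmethod
    def key(host, path, query="", vary_headers=()):
        # The key combines every piece of the request that determines
        # which stored response is the correct one to serve.
        return (host, path, query, tuple(sorted(vary_headers)))

    def lookup(self, host, path, query="", vary_headers=()):
        return self.store.get(self.key(host, path, query, vary_headers))

    def insert(self, host, path, body, query="", vary_headers=()):
        self.store[self.key(host, path, query, vary_headers)] = body

cache = Cache()
cache.insert("example.com", "/logo.png", b"<png bytes>")
print(cache.lookup("example.com", "/logo.png"))  # b'<png bytes>'
```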
TL;DR: There are a slew of other things to consider when implementing a CDN, but POP distribution and GeoIP are the most important.
How is Fastly different?
Traditionally, when using a CDN, it is the customer’s job to upload content directly to the cache servers. Fastly instead fetches – and then stores – content from the customer’s origin server as it’s requested. This method, called “reverse proxying”, eliminates the need to front-load the caches.
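A minimal sketch of that fetch-on-miss behavior, with a hypothetical fetch_from_origin standing in for the real origin request:

```python
# Sketch of "reverse proxy" fetch-on-miss: on a cache miss, the cache
# fills itself from the origin rather than requiring an upload.
# fetch_from_origin is a hypothetical stand-in for a real origin fetch.

cache = {}

def fetch_from_origin(path):
    return f"<origin content for {path}>"

def serve(path):
    if path not in cache:                   # miss: go to the origin once...
        cache[path] = fetch_from_origin(path)
    return cache[path]                      # ...then serve from cache

print(serve("/logo.png"))  # first request fills the cache
print(serve("/logo.png"))  # second request is a cache hit
```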
When content changes, instead of uploading a new copy of the resource, Fastly’s customers send us a short message instructing our cache servers to invalidate that content. Later, when the invalidated content is requested again, we fetch a fresh copy from the origin and replace it. This process, called “instant purging”, allows customers to push updates live in approximately 200 milliseconds; with legacy CDNs, the equivalent upload process can take anywhere from 15 minutes to an hour.
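Fastly exposes single-URL purging as an HTTP PURGE request against the cached URL itself. The sketch below assumes no extra authentication is configured for the service, which depends on your setup; the connection class is injectable so the function can be exercised without a live network call:

```python
import http.client

def purge_url(host, path, connection_cls=http.client.HTTPSConnection):
    """Invalidate one cached URL by sending an HTTP PURGE request.

    Fastly accepts PURGE as an HTTP method on the URL itself; whether
    authentication is required depends on the service configuration,
    so treat this as a sketch rather than a recipe.
    """
    conn = connection_cls(host, timeout=5)
    conn.request("PURGE", path)
    status = conn.getresponse().status
    conn.close()
    return status

# Example (commented out to avoid a live network call):
# purge_url("www.example.com", "/logo.png")
```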
Instant purging also sets Fastly apart from its competitors in a significant way: we make it possible to serve dynamic content. Because any HTTP response can be cached, we simply fetch the dynamic page from the origin, and our customers issue a purge request whenever the underlying data model changes. In some cases this can be as simple as adding a hook at the model level of an application.
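That model-level hook can be as small as the following sketch, where purge() is a hypothetical stand-in for the real purge call and the Article model is invented for illustration:

```python
# Sketch of a model-level purge hook. purge() is a hypothetical
# stand-in for the real cache-invalidation call; here it just records
# which URLs would be purged.

purged = []

def purge(path):
    purged.append(path)  # in reality: send the purge message to the CDN

class Article:
    def __init__(self, slug, body):
        self.slug = slug
        self.body = body

    def save(self):
        # ...persist the article to the database here...
        # Then invalidate every cached page this change affects.
        purge(f"/articles/{self.slug}")
        purge("/articles/")  # the index page lists this article too

Article("hello-cdn", "updated text").save()
print(purged)  # ['/articles/hello-cdn', '/articles/']
```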
The skinny: Fastly redefines the legacy CDN model through advanced features such as reverse proxying and instant purging.
But that’s just the beginning!
This post has touched upon many of the core ideas behind how CDNs operate and how Fastly is different. In the next few weeks, we hope to go into more detail about features such as advanced routing and cache control that can help you meet your organization’s caching needs. Thanks for reading, and remember: Fight Slow, Go Fastly!