The future of the edge

The following post is based on CTO Tyler McMullen’s talk at Altitude, which focused on the future of the edge. Read the full recap of our customer summit here.

CDNs are stuck. We’re doing the best that we can with the current model CDNs use: we’re able to pass through writes and pull content from origin, which lets us cache static assets and content that changes frequently. What we can’t do is (effectively) cache responses that change on every request, that are different for every user, and that modify state at the origin. That is, we can’t do anything with writes. Where does that leave us?

In this post, I’ll explore “the future of the edge,” or the next logical step in how we streamline online experiences. In order to keep up with the direction things are headed, we need to combine logic and data at the edge. Logic without data, without state, is insufficient.

Where we were

The internet of the 90s was a very different place. It was mostly static content, simpler, and far less interactive. CDNs started out caching only static content — this made sense because almost everything on the internet was static. Because internet backbones were largely underprovisioned, serving static content was the major problem that CDNs needed to solve.

But the internet has evolved — things changed a lot between 1999 and 2010:

  • Infrastructure improved. Suddenly those backbones weren’t quite so congested; we’re now running 100+ gigabit-per-second lines under the oceans everywhere.

  • Interaction changed. We started building apps differently — there’s more dynamic content, more user interaction, and apps are much more complex than they used to be. There are also different types of apps — APIs, single-page apps (SPAs), and mobile apps, not to mention social networks and massive online games and all the challenges those imply (e.g., going from 0 to 10M users in the space of a few days).

CDNs responded to these changes by… continuing to cache static content. The way that CDNs were used by most sites did not change significantly within that period of time.

Catching up with the internet

In 2011, Fastly responded to the changing internet landscape by making it possible to cache event-driven content. TL;DR: event-driven content is content that’s not actually dynamic, but was traditionally seen as such (and therefore assumed to be uncacheable). It stays the same until someone takes an action — like commenting on a blog post or editing a wiki article. We addressed event-driven content by creating Instant Purge, which lets our customers update content within 150 milliseconds. This change was a long time coming — the way we interacted with the web had changed long before CDNs did.
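
To make the event-driven pattern concrete, here’s a minimal sketch of the origin side, assuming the Python `requests` library and a hypothetical blog at www.example.com fronted by a CDN that accepts HTTP PURGE requests for a URL (Fastly documents this; depending on configuration, purges may require authentication). The names `on_comment_posted` and the domain are made up for illustration — the point is simply that the page stays cacheable between events, and the event itself triggers the purge.

```python
# Sketch: purge a cached blog-post page the moment a comment lands,
# so "event-driven" content can stay cacheable between events.
import requests

def on_comment_posted(post_path: str) -> None:
    """Called by the blog application after a comment is saved at origin."""
    url = f"https://www.example.com{post_path}"
    # Issue an HTTP PURGE for the page; the next request repopulates the cache.
    resp = requests.request("PURGE", url, timeout=5)
    resp.raise_for_status()

on_comment_posted("/blog/future-of-the-edge/")
```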

The internet is still changing, however. An increasing number of sites and applications require:

  • Personalization. More sites are becoming highly personalized. Take Google News, for example — the content I see is entirely different from what you see. The same is true of the product recommendations on Amazon.

  • Interactivity. More sites and apps are highly interactive — Pokémon Go is a beautiful example of this. By trying to catch Pokémon, you’re modifying state — and all of this requires going back to origin.

CDNs are stuck

Personalization and interactivity are problems CDNs can’t currently solve. We’re doing the best that we can with the current model CDNs use. We’re able to pass through writes and pull content from origin, which lets us cache static assets and content that changes frequently. What we can’t do (effectively) is cache responses that change on every request, that are different for every user, and that modify state at the origin.

Is edge compute the answer?

It’s supposed to be. In case you’re unfamiliar, edge compute is basically pushing logic to the edges of the network. The idea is not a new one, and it has many applications. Some of those applications have been highly successful. One just needs to compare the richness of browser-based applications of today to the websites of the olden days to see this in action. Modern smartphones are also a good example of this when compared with the limited abilities of cellphones from the 90s and early 2000s.

CDNs have also attempted to apply this idea in the past. Adding the ability to respond to requests at the CDN layer without having to call back to an origin seems like a great idea. Moving difficult-to-scale pieces of your business logic out to the edge of the network and letting it use the massive scale of CDNs should be an easy sell. However, by and large, it has not caught on. Why?

Data: the missing piece

Right now, your data doesn’t live at the edge — your assets do. Only the results derived from your data live at the edge. But logic without data, without state, without the information that makes your application actually work, is not sufficient. Your infrastructure logic can live at the edge right now — you can choose how to load balance and route requests, and you can modify requests and responses — but the core of your application logic cannot. You can’t yet actually move business logic to the edge. The current vision of edge compute doesn’t address this, and therefore doesn’t solve the problem.

Personalization requires data — you have to know about the user in order to know what to give them. Interactivity requires data — users interact with and modify the world you’ve built. Data is the piece that’s missing — you need to be able to read and write to it from the edge, and globally replicate it. This would actually make edge compute useful — logic + data means you can build real apps, real interactions, at the edge.
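
As a thought experiment, here’s the shape of what logic + data at the edge could look like. Nothing below is a real Fastly API — `EdgeStore` and `handle_request` are hypothetical stand-ins — but the sketch shows the idea: an edge node reads locally replicated data to personalize a response and records the user’s action as a local write, with replication happening in the background instead of a round trip to origin.

```python
# Hypothetical sketch only: an edge node combining logic with replicated data.
from dataclasses import dataclass, field

@dataclass
class EdgeStore:
    """Stand-in for one edge node's replica of a globally replicated store."""
    local: dict = field(default_factory=dict)
    outbox: list = field(default_factory=list)  # updates awaiting replication

    def get(self, key, default=None):
        return self.local.get(key, default)

    def put(self, key, value):
        self.local[key] = value
        self.outbox.append((key, value))  # replicated asynchronously

def handle_request(store: EdgeStore, user_id: str, item_id: str) -> str:
    # Personalization: logic plus data, evaluated entirely at the edge.
    profile = store.get(f"profile:{user_id}", {"interests": []})
    headline = "Recommended for you" if item_id in profile["interests"] else "Trending now"
    # Interactivity: the request modifies state without calling back to origin.
    store.put(f"view:{user_id}:{item_id}", 1)
    return f"<h1>{headline}</h1>"

store = EdgeStore()
store.put("profile:alice", {"interests": ["item-42"]})
print(handle_request(store, "alice", "item-42"))
```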

How do we get there?

This is by no means an easy problem. Anyone who has worked on large-scale distributed systems knows that this is much, much harder than the current vision of edge compute.

We need to address:

  • Global replication. We need a way to get data around the world very rapidly and reliably.

  • Storage. We need a way to partition, store, and very rapidly query data.

  • APIs. We need a way to interact with data at the edge.

  • Consistency. We need a way to deal with concurrent updates at multiple locations.

Addressing these problems could be seen as a big departure from what CDNs typically do, but many of the pieces are already in place. In fact, Fastly has already addressed three of the four above problems in the past few years:

  • We haven’t just built purging; we’ve built a fast, reliable global broadcast system (i.e., a way to get a message from New Zealand to London as quickly as possible).

  • The SSD storage engine we wrote for our version of Varnish is actually a fast, scalable, distributed storage system, and in fact already scales across thousands of servers.

  • Users of Fastly have always been able to program the CDN using VCL. VCL and our recently released Edge Dictionaries feature are actually an API for using arbitrary data in edge logic.

And, as it turns out, recent research can help us with the fourth.

Addressing consistency

As mentioned above, we need a way to deal with consistency. Ensuring consistency and convergence ends up being one of the trickiest problems with edge data, and is likely what’s prevented others from implementing it already.

Techniques that are commonly used for ensuring consistency of data across computers in a single data center or region are difficult or impossible to apply to this problem. What we need is something more akin to the techniques used in collaborative editors such as Google Docs: you don’t wait for data to synchronize before applying an edit, and there can’t ever be a conflict because the data structures are designed to always be mergeable.

In order to make a system like this work at a global scale, we must avoid synchronization. We can’t delay an update made in London while we wait to hear from servers in New Zealand on whether we can safely apply the update. In other words, every server must be able to operate independently.

Luckily, distributed systems research has not been idle, and techniques now exist for addressing this problem. The Berkeley Orders Of Magnitude (BOOM) project has made important progress on how to avoid coordination in large-scale distributed systems, and the SyncFree group has pioneered ideas like Conflict-free Replicated Data Types (CRDTs), which make it possible to design data structures that can be updated concurrently around the world, without ever requiring synchronization.
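
To make that concrete, here’s a minimal sketch (not Fastly’s implementation) of one of the simplest CRDTs, a grow-only counter. Each replica only ever increments its own slot, and merging takes the element-wise maximum, so two edge locations can accept writes independently and still converge on the same value.

```python
# A grow-only counter (G-Counter): concurrent increments never conflict,
# and merge (element-wise max) is commutative, associative, and idempotent,
# so replicas can exchange state in any order, any number of times.
class GCounter:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.counts = {}  # node_id -> count contributed by that node

    def increment(self, n: int = 1) -> None:
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + n

    def merge(self, other: "GCounter") -> None:
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)

    def value(self) -> int:
        return sum(self.counts.values())

london, auckland = GCounter("lon"), GCounter("akl")
london.increment(3)   # applied immediately, no waiting on other replicas
auckland.increment(2)
london.merge(auckland)
auckland.merge(london)
assert london.value() == auckland.value() == 5
```

Real systems build richer types on the same principle — sets, maps, and registers whose merge functions are designed to always converge.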

In short, there are now ways to address the last major hurdle to implementing edge data.

The future: data at the edge

The current model of CDNs has reached a fundamental limit. But just as with the advent of fast config changes, real-time stats, and Instant Purge, if the rest of the industry is not heading in this direction yet, it will be soon.

The pieces we need to build data + logic at the edge exist, so we know it’s not impossible. It will require a lot of engineering and product effort to build, but what becomes possible when this exists in a stable, reliable, and scalable way?

Data at the edge is the answer: it’s necessary to make edge compute useful and to move past where we currently are.

Watch the full video of Tyler’s talk below, and keep an eye on our blog for more Altitude 2016 talks.

Tyler McMullen is CTO at Fastly, where he’s responsible for the system architecture and leads the company’s technology vision. As part of the founding team, Tyler built the first versions of Fastly’s Instant Purging system, API, and Real-time Analytics. A self-described technology curmudgeon, he has experience in everything from web design to kernel development, and loathes all of it. Especially distributed systems.
