The following post is based on CTO Tyler McMullen’s talk at Altitude, which focused on the future of the edge. Read the full recap of our customer summit here.
CDNs are stuck. We're doing the best we can with the current model CDNs use: we're able to pass through writes and pull content from origin, which lets us cache static assets and content that changes frequently. What we can't do (effectively) is cache responses that change on every request, that are different for every user, and that modify state at the origin. That is, we can't do anything with writes. Where does that leave us?
In this post, I’ll explore “the future of the edge,” or the next logical step in how we streamline online experiences. In order to keep up with the direction things are headed, we need to combine logic and data at the edge. Logic without data, without state, is insufficient.
The internet of the 90s was a very different place. It was mostly static content, simpler, and far less interactive. CDNs started out caching only static content — this made sense, because almost everything on the internet was static. And because internet backbones were largely underprovisioned, serving static content was the major problem CDNs needed to solve.
But the internet has evolved — things changed a lot between 1999 and 2010:
CDNs responded to these changes by… continuing to cache static content. The way most sites used CDNs did not change significantly during that period.
In 2011, Fastly responded to the changing internet landscape by making it possible to cache event-driven content. TL;DR: event-driven content is content that’s not actually dynamic, but was traditionally seen as such (and therefore assumed to be uncacheable). It stays the same until someone takes an action — like commenting on a blog post or editing wiki articles. We addressed event-driven content by creating Instant Purge, which lets our customers update content in 150 milliseconds. This change was a long time coming — the way we interacted with the web had changed long before CDNs did.
The internet is still changing, however. An increasing number of sites and applications require:
Personalization and interactivity are problems CDNs can't currently solve. We're doing the best that we can with the current model CDNs use. We're able to pass through writes and pull content from origin, which lets us cache static assets and content that changes frequently. What we can't do (effectively) is cache responses that change on every request, that are different for every user, and that modify state at the origin.
It’s supposed to be. In case you’re unfamiliar, edge compute is basically pushing logic to the edges of the network. The idea is not a new one, and it has many applications. Some of those applications have been highly successful. One just needs to compare the richness of browser-based applications of today to the websites of the olden days to see this in action. Modern smartphones are also a good example of this when compared with the limited abilities of cellphones from the 90s and early 2000s.
CDNs have also attempted to apply this idea in the past. Adding the ability to respond to requests at the CDN layer without having to call back to an origin seems like a great idea. Moving difficult-to-scale pieces of your business logic out to the edge of the network and letting it use the massive scale of CDNs should be an easy sell. However, by and large, it has not caught on. Why?
Right now, your data doesn’t live at the edge — your assets do. The results of your data live at the edge. But logic without data, without state, and without the information that makes your application actually work, is not sufficient. Your infrastructure logic can live at the edge right now — you can choose how to do load balancing, routing requests, and modify requests and responses — but not the core of your application logic. You can’t yet actually move business logic to the edge. The current vision of edge compute doesn’t address this, and therefore doesn’t solve the problem.
Personalization requires data — you have to know about the user in order to know what to give them. Interactivity requires data — users interact with and modify the world you’ve built. Data is the piece that’s missing — you need to be able to read and write to it from the edge, and globally replicate it. This would actually make edge compute useful — logic + data means you can build real apps, real interactions, at the edge.
This is by no means an easy problem. Anyone who has worked on large-scale distributed systems knows that this is much, much harder than the current vision of edge compute.
We need to address:
Addressing these problems could be seen as a big departure from what CDNs typically do, but many of the pieces are already in place. In fact, Fastly has already addressed three of the four above problems in the past few years:
And, as it turns out, recent research can help us with the fourth.
As mentioned above, we need a way to deal with consistency. Ensuring consistency and convergence ends up being one of the trickiest problems with edge data, and is likely what’s prevented others from implementing it already.
Techniques commonly used for ensuring consistency of data across computers in a single data center or region are difficult or impossible to apply to this problem. What we need is something more akin to the techniques used in collaborative editors such as Google Docs. With Google Docs, you don't wait for data to synchronize before applying an edit. There can't ever be a conflict, because the data structures are designed to always be mergeable.
In order to make a system like this work at a global scale, we must avoid synchronization. We can’t delay an update made in London while we wait to hear from servers in New Zealand on whether we can safely apply the update. In other words, every server must be able to operate independently.
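As a toy illustration of that independence, here's a minimal last-writer-wins (LWW) register in Python — one of the simplest always-mergeable designs. The replica names and timestamps are invented for the example; a production system would need a more careful notion of time:

```python
class LWWRegister:
    """Last-writer-wins register: every write carries a
    (timestamp, replica_id) tag, and merge keeps the largest tag."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.tag = (0, replica_id)  # (timestamp, replica_id)
        self.value = None

    def set(self, value, timestamp):
        # Applied locally and immediately -- no round trip to ask
        # other replicas whether the write is safe.
        tag = (timestamp, self.replica_id)
        if tag > self.tag:
            self.tag, self.value = tag, value

    def merge(self, other):
        # Tuple comparison breaks timestamp ties by replica_id, so
        # both sides deterministically pick the same winner.
        if other.tag > self.tag:
            self.tag, self.value = other.tag, other.value

# A server in London and one in New Zealand each accept a write
# without waiting to hear from the other...
london = LWWRegister("lon")
nz = LWWRegister("nz")
london.set("blue", timestamp=10)
nz.set("green", timestamp=12)

# ...and when their states eventually meet, they converge.
london.merge(nz)
nz.merge(london)
assert london.value == nz.value == "green"
```

The key property is that `merge` is deterministic and order-independent, so no replica ever has to pause and coordinate before accepting an update.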
Luckily, distributed systems research has not been idle, and techniques now exist for addressing this problem. The Berkeley Orders Of Magnitude (BOOM) project has made important progress on avoiding coordination in large-scale distributed systems, and the SyncFree group has pioneered ideas like Conflict-free Replicated Data Types (CRDTs), which make it possible to design data structures that can be updated concurrently around the world without ever requiring synchronization.
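To make the CRDT idea concrete, here's a sketch of a grow-only counter (G-Counter), one of the simplest CRDTs. Each replica only ever increments its own slot, and merging takes the element-wise maximum — the replica IDs below are purely illustrative:

```python
class GCounter:
    """Grow-only counter CRDT: one monotonically increasing
    slot per replica; merge is element-wise max."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> count

    def increment(self, n=1):
        # A replica only ever writes its own slot, so concurrent
        # increments on different replicas can never conflict.
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Element-wise max is commutative, associative, and
        # idempotent, so replicas converge no matter how many
        # times, or in what order, states are exchanged.
        for rid, count in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), count)

# Two replicas increment independently, then exchange state.
london = GCounter("lon")
auckland = GCounter("akl")
london.increment(3)
auckland.increment(2)
london.merge(auckland)
auckland.merge(london)
assert london.value() == auckland.value() == 5
```

Because merging is idempotent, states can be gossiped freely between edge nodes; applying the same update twice changes nothing, which is exactly what a globally replicated system needs.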
In short, there are now ways to address the last major hurdle to implementing edge data.
The current model of CDNs has reached a fundamental limit. But just as with fast config changes, real-time stats, and Instant Purge, if the rest of the industry is not heading in this direction yet, it will be soon.
The pieces we need to build data + logic at the edge exist, so we know it’s not impossible. It will require a lot of engineering and product effort to build, but what becomes possible when this exists in a stable, reliable, and scalable way?
Data at the edge is the answer: it’s necessary to make edge compute useful and to move past where we currently are.
Watch the full video of Tyler’s talk below, and keep an eye on our blog for more Altitude 2016 talks.