New ways to compose content at the edge with Compute@Edge
Most content served through Fastly is created on our customers' servers, but it doesn't have to be. It's always been possible to create content programmatically, and with the advent of Compute@Edge, which lets you build, test, and deploy code in our serverless compute environment, we've made it possible to create and transform content more efficiently and powerfully than ever before.
Traditionally, a content delivery network (CDN) sits between your web servers and your end users and caches your content so that it's faster than fetching it all the way from the origin, especially for users who are physically a long way away from where your servers are. This is an essential component of most modern websites that aim to operate performantly at scale.
So if having a CDN in front of your site is essential anyway, it seems a shame to still generate all your content in one place. When you have such a powerful tool for processing requests sitting so close to the end user, why not make it a more integral part of your infrastructure?
Getting started: Robots, redirects and CORS
CORS, or cross-origin resource sharing, is a great example of something that the edge is ideal for. When requesting cross-origin resources, browsers may need to issue a preflight OPTIONS request to validate that the target host is willing to service the request. The response to this needs to cite the name of the origin making the request, so it's not an entirely static piece of content, but it is simple enough to generate at the edge, whether on Compute@Edge or in VCL. Hit RUN below to try it on a Fastly VCL service:
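As a sketch of the idea, a VCL preflight handler might look something like this (the 600-range status is just an internal marker for triggering a synthetic response, and a real service would check the `Origin` header against an allowlist before echoing it back):

```vcl
sub vcl_recv {
  # Answer CORS preflight requests at the edge, without going to origin.
  if (req.method == "OPTIONS" && req.http.Origin) {
    error 600;
  }
}

sub vcl_error {
  if (obj.status == 600) {
    set obj.status = 204;
    set obj.response = "No Content";
    # Echo the requesting origin back to the browser.
    set obj.http.Access-Control-Allow-Origin = req.http.Origin;
    set obj.http.Access-Control-Allow-Methods = "GET, POST, OPTIONS";
    set obj.http.Access-Control-Max-Age = "86400";
    return(deliver);
  }
}
```

The browser gets its answer from the nearest Fastly POP, and the preflight never touches your origin at all.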
Simple synthetic responses like this can address all kinds of use cases. Another common one is redirects: whether it's HTTP to HTTPS, normalizing a hostname (e.g. adding www. to an apex domain, or removing it!), or mapping old paths to new ones, the edge is the ideal place to issue them.
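For instance, normalizing an apex domain to its www. equivalent might look like this in VCL (example.com here is a stand-in for your own domain):

```vcl
sub vcl_recv {
  # Redirect the apex domain to www, preserving the path and query string.
  if (req.http.host == "example.com") {
    error 601;
  }
}

sub vcl_error {
  if (obj.status == 601) {
    set obj.status = 301;
    set obj.response = "Moved Permanently";
    # String concatenation in VCL is by juxtaposition.
    set obj.http.Location = "https://www.example.com" req.url;
    return(deliver);
  }
}
```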
Handling redirects at the edge like this is a great way to reduce clutter at your origin as well as providing a potentially massive performance boost to end users.
Of course, synthetic responses can contain body content too. In a microservice architecture you might be stuck for where to serve files like robots.txt from. Do it from the edge!
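A synthetic robots.txt served from the edge can be as simple as this sketch (the rules themselves are placeholders):

```vcl
sub vcl_recv {
  if (req.url.path == "/robots.txt") {
    error 602;
  }
}

sub vcl_error {
  if (obj.status == 602) {
    set obj.status = 200;
    set obj.response = "OK";
    set obj.http.Content-Type = "text/plain";
    # The {"..."} form is a VCL long string, which can span lines.
    synthetic {"User-agent: *
Disallow: /admin/
"};
    return(deliver);
  }
}
```

No origin service ever has to own that file again.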
Leveling up: edge-served APIs and beacon termination
Fastly is often used as an API gateway (and we're an extremely good one), but have you considered serving API responses at the edge without needing to pass them to origin? This is especially useful when you want to give clients access to information that is already available within Fastly, like geolocation data. Try it in VCL:
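A tiny geolocation API served entirely from the edge might be sketched like this (the /api/geo path is hypothetical):

```vcl
sub vcl_recv {
  # Serve a small JSON response built from Fastly's geolocation variables.
  if (req.url.path == "/api/geo") {
    error 603;
  }
}

sub vcl_error {
  if (obj.status == 603) {
    set obj.status = 200;
    set obj.response = "OK";
    set obj.http.Content-Type = "application/json";
    synthetic {"{"city": ""} client.geo.city {"", "country": ""} client.geo.country_code {""}"};
    return(deliver);
  }
}
```

Every POP already knows where the client is, so this response costs nothing more than the round trip to the nearest edge.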
There's a huge wealth of information Fastly surfaces to your edge applications automatically, or you can add your own key-value datasets using Edge Dictionaries.
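Edge Dictionaries are exposed to VCL as tables. As a sketch, with a hypothetical dictionary named feature_flags:

```vcl
table feature_flags {
  "new_checkout": "true"
}

sub vcl_recv {
  # Look up a key, falling back to a default if it's absent.
  set req.http.X-New-Checkout = table.lookup(feature_flags, "new_checkout", "false");
}
```

Dictionary contents can be updated via the Fastly API without redeploying your service, which makes them handy for feature flags, redirect maps, and per-tenant configuration.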
And while we're on the subject of APIs: we've talked extensively before about terminating beacons at the edge. All those metrics you're collecting from clients to measure performance and user experience? There's no need to bombard your own servers with them. With a bit of help from our real-time log streaming (one of my personal favourite Fastly features), you can clean, validate, enrich, and aggregate the inbound data, and then send it directly from Fastly to anything from S3 and Google Cloud Storage to BigQuery or Splunk, or even your own collector servers.
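The beacon-termination pattern can be sketched in VCL like this (the /beacon path and the logging endpoint name "beacons" are hypothetical; the endpoint itself is configured in the Fastly app or API):

```vcl
sub vcl_recv {
  if (req.url.path == "/beacon") {
    # Stream the payload to a configured real-time logging endpoint...
    log {"syslog "} req.service_id {" beacons :: "} client.ip " " req.url.qs;
    # ...and answer the client immediately, without involving origin.
    error 604;
  }
}

sub vcl_error {
  if (obj.status == 604) {
    set obj.status = 204;
    set obj.response = "No Content";
    return(deliver);
  }
}
```

The client gets an instant 204, and your analytics pipeline receives the data via log streaming rather than your web servers.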
Maximum value: fully templating complex pages at the edge
So far, so good. All of these solutions have been successfully powering our customers' sites for years and are now achievable in both VCL and Compute@Edge. One thing VCL has never been able to do, though, is compose a response body by processing API data from upstream. This is incredibly powerful: imagine being able to query and load data efficiently from multiple sources in parallel, including some that may already be available in cache, and then generating HTML pages using templates stored at the edge.
Less origin traffic. Easier debugging and separation of concerns. Faster end user performance. This is nothing short of moving the entire view layer of your application to the edge.
My colleague Kailan Blanks recently added a tutorial to our developer hub with a step-by-step walkthrough of how to do this with a weather app using Rust on Compute@Edge. Here is the resulting app running in a frame on this very page:
I love this example because it neatly demonstrates all the necessary building blocks for creating huge applications at the edge:
API requests to origin: We send requests to OpenWeatherMap through the Fastly edge cache, using the send() method of our Request struct in the Rust SDK.
Community libraries: We're parsing API responses from OpenWeatherMap with serde_json and then composing beautiful HTML pages with tinytemplate. You can use any modules or libraries that compile to WebAssembly.
In many cases where content is considered “uncacheable,” it's because it is composed from lots of different bits of data combined together. For example, a fully formed HTML page might have the current user's name, an article based on the URL, and links based on the country that the page is being requested from.
The Vary header offers a powerful mechanism for keeping these kinds of variations of a cacheable object separate, but there may come a point where there are too many variations for this method to be practical. That's where templating directly at the edge is so powerful! Each of the individual elements of data that go into the composition of the page can be cached separately, and dynamic pages can be assembled by stitching together the appropriate fragments.
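For the cases where Vary does still fit, the VCL pattern is worth knowing. A sketch of varying cached content by country, using a custom header populated from geolocation data:

```vcl
sub vcl_recv {
  # Normalize the variation key into a header the cache can vary on.
  set req.http.X-Country = client.geo.country_code;
}

sub vcl_fetch {
  # Store one cached variant of this object per country.
  set beresp.http.Vary = "X-Country";
}
```

Because the key is normalized at the edge (a couple of hundred country codes rather than millions of raw values), the number of variants stays bounded. When the combinatorics of your variations outgrow this, per-fragment caching plus edge templating takes over.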
I wrote a few months ago about new ways of thinking about designing “edge-native” applications, but moving some templating to the edge is probably easier than you think. Why not pick a page where you want to improve performance, and give it a go?