Since the August launch of Build on Fastly, our developer library, we have been quietly adding many new solutions: beacon termination, geofencing, numerous flavours of load balancing, and lots of other goodies. Here's a list so you don't miss out on all the new ideas for getting the most out of Fastly.
Build on Fastly combines a huge library of solution recipes and template code with some meticulous walkthroughs of common patterns that help you to understand and further your expertise in programming on the edge.
New patterns in geofencing and beacon termination
A huge number of Fastly customers make use of the ability to target certain countries with different experiences. Whether this is for licensing, regulatory, or localisation reasons, Fastly's geolocation data provides a valuable ability to deliver the right experience to the right user in the right location.
Our solution pattern covering geolocation introduces best practice techniques for getting the most out of geofencing — including the correct use of the Vary header, blocking access directly at the edge, custom regions, and grid calculations.
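To make the Vary technique concrete, here is a minimal sketch (the region names and the X-Region header are assumptions, not part of the recipe): the raw country code is collapsed into a small set of custom regions in vcl_recv, and the origin's Vary header is extended so each region gets its own cached variant instead of one variant per country.

```vcl
sub vcl_recv {
  # Collapse the country code into a coarse custom region so the
  # cache varies on a handful of values, not ~250 countries
  if (client.geo.country_code ~ "^(DE|FR|IT|ES|NL)$") {
    set req.http.X-Region = "eu";
  } else if (client.geo.country_code ~ "^(US|CA)$") {
    set req.http.X-Region = "na";
  } else {
    set req.http.X-Region = "row";  # rest of world
  }
}

sub vcl_fetch {
  # Vary on the normalised region header, preserving any existing Vary
  if (beresp.http.Vary) {
    set beresp.http.Vary = beresp.http.Vary ", X-Region";
  } else {
    set beresp.http.Vary = "X-Region";
  }
}
```

Varying on the normalised header rather than on client.geo.country_code directly is what keeps cache fragmentation under control.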
We've also been talking for a long time about the ability to terminate beacons on Fastly's edge cloud, transform the data, and batch and deliver it to your preferred analytics tool. Mostly because our real-time logging system is something we're very proud of, and we like solutions that give you a sense of what you can do with it.
Beacon termination is a great way to reduce origin load and an incredibly efficient way to collect analytics. But fewer developers have made use of the ability to transform data into CSV, JSON, a query string, or a structured header, or to enrich it with the wealth of information Fastly makes available in VCL. The web standard Navigator.sendBeacon is also underused as a better alternative to onUnload handlers or tracking pixels.
With our beacon termination pattern, we hope to help you to extract even more value from our real-time logging system.
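The core of the pattern can be sketched in a few lines of VCL (the /beacon path, the custom 900 status, and the logging endpoint name are all assumptions for illustration): the beacon request never reaches your origin; it is answered synthetically at the edge while its data, enriched with geolocation, goes straight to real-time logging.

```vcl
sub vcl_recv {
  # Terminate beacon requests at the edge; never forward them to origin
  if (req.url.path == "/beacon") {
    error 900;  # arbitrary custom status, handled in vcl_error
  }
}

sub vcl_error {
  if (obj.status == 900) {
    # Emit the beacon payload to a logging endpoint (name assumed),
    # enriched with geo data that VCL makes available
    log {"syslog "} req.service_id {" analytics_endpoint :: "}
        client.geo.country_code "|" req.url.qs;
    set obj.status = 204;
    set obj.response = "No Content";
    synthetic "";
    return(deliver);
  }
}
```

A 204 with an empty body is the conventional response for a beacon: the browser needs nothing back, and there is nothing to cache.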
Strategies for directors and load balancing
You'll probably already know that Fastly supports load balancing, but did you know we support four different standardised director strategies? A director is a group of origin servers that can pretend to be a single origin server for the purposes of assigning a backend to a request, and then will internally use the specified strategy to pick the most appropriate origin from the pool.
We published new recipes for all of the strategies available:
Random: for spreading traffic across origins in a weighted manner. Useful for conventional load balancing.
Consistent hash: for mapping requests based on the content requested. Useful when origins have their own caches and you want to use them as efficiently as possible by fetching the same piece of content always from the same origin.
Fallback: for selecting the first origin that works. Useful when you want your origins to operate in an active/standby configuration.
Client: for mapping requests based on the user's identity. Useful for so-called "sticky" sessions, where a user is consistently mapped to a particular origin server for the duration of their session.
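In custom VCL, a director is declared alongside its member origins and then assigned as if it were a single backend. A minimal sketch of the random strategy (the backend names F_origin_a and F_origin_b and the weights are assumptions):

```vcl
# A weighted random director: origin_a receives roughly twice
# the traffic of origin_b
director origin_pool random {
  .quorum = 50%;  # require half the members healthy to use the pool
  { .backend = F_origin_a; .weight = 2; }
  { .backend = F_origin_b; .weight = 1; }
}

sub vcl_recv {
  # The director stands in for a single origin; the strategy
  # picks the actual backend internally
  set req.backend = origin_pool;
}
```

Swapping the strategy keyword (random, hash, fallback, client) is all it takes to move between the four behaviours described above.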
Of course, you can also write your own load balancing logic. The geofencing solution linked above is one example of a load balancing strategy. Another, routing to origins based on a URL path fragment, is the classic way to front microservices. We've had a microservices recipe for a long time!
Web Application Firewall (WAF) ideas
WAF is necessarily a complex product, and many customers rely on our solutions team to help tune a WAF to be appropriate for their needs. But that doesn't mean it needs to be a black box! Like almost all of the Fastly platform, you can control much of how WAF behaves by writing your own code.
For starters, you might want to understand what kinds of vulnerabilities we can identify. We've published a recipe that simulates all the common exploit types, and demonstrates WAF catching and blocking them.
If you're not happy with the 15,000 or so rules that we have available within our WAF rules engine, why not write your own rules? You can still benefit from the standard WAF logging mechanism, so violations will still be reported along with all your other rules, plus you can use the same scoring buckets (you won't see these rules in the WAF UI though).
Looking to confuse attackers? Maybe try randomising responses in the event of a block!
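One way to sketch that randomisation, assuming blocked requests surface in vcl_error as a 403 carrying a marker header (the X-WAF-Block header name and the alternative status codes here are assumptions):

```vcl
sub vcl_error {
  # Hypothetical marker set when the WAF blocks a request
  if (obj.status == 403 && req.http.X-WAF-Block == "1") {
    declare local var.roll INTEGER;
    set var.roll = randomint(0, 2);  # 0, 1, or 2, uniformly
    if (var.roll == 0) {
      set obj.status = 404;
      set obj.response = "Not Found";
    } else if (var.roll == 1) {
      set obj.status = 503;
      set obj.response = "Service Unavailable";
    }
    # var.roll == 2 leaves the plain 403 in place
    return(deliver);
  }
}
```

Varying the status denies an attacker a stable signal for probing which payloads trip the WAF.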
And you might not want to run WAF on all requests. We generally recommend that it only runs on requests to your origin (i.e., no need to run it if we are going to serve content from cache). But maybe you don't need WAF for requests to static object backends like GCS or S3, so you could consider having a custom condition for triggering WAF.
New features: segmented caching and binary synthetics
2019 was the year we launched segmented caching, and the platform also gained the ability to encode binary responses directly into your configuration artifact, allowing them to be served in a few microseconds, faster even than a cache lookup in many cases.
Adopting segmented caching is a challenge for some customers because it requires a full cache purge. No problem: here's a recipe for enabling segmented caching gradually across your content.
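A gradual rollout can be keyed on a stable hash of the URL path, so each object is consistently either in or out of the segmented population (the 25% threshold here is an assumption; raise it over time to complete the migration):

```vcl
sub vcl_recv {
  # Bucket each path into 0-255 using the first two hex digits
  # of its MD5 hash; enable segmented caching for ~25% of paths
  declare local var.bucket INTEGER;
  set var.bucket = std.strtol(substr(digest.hash_md5(req.url.path), 0, 2), 16);
  if (var.bucket < 64) {  # 64/256 = 25%
    set req.enable_segmented_caching = true;
  }
}
```

Using a hash of the path, rather than a per-request random choice, matters: the same object must not flip between segmented and whole-object fetches, or you'd purge and refill it repeatedly.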
Incidentally, and kind of the opposite feature, we've been asked to build a solution that automatically drops a cache at a particular time each day. This is generally not a very good idea, but customers coming from other platforms have needed it for migrations, so, fair enough, we present a recipe for scheduled invalidation. Use at your own risk!
We also added a new VCL statement, synthetic.base64, which creates a response body from a base64-encoded string. This enables binary objects to be encoded directly into your source code. Get your favicon on.
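For example, a favicon can be served synthetically without ever touching origin or cache (the 902 status is an arbitrary assumption, and the base64 payload is a placeholder for your own encoded file):

```vcl
sub vcl_recv {
  if (req.url.path == "/favicon.ico") {
    error 902;  # arbitrary custom status routed to vcl_error
  }
}

sub vcl_error {
  if (obj.status == 902) {
    set obj.status = 200;
    set obj.response = "OK";
    set obj.http.Content-Type = "image/png";
    # Placeholder payload; paste the base64 encoding of your real favicon
    synthetic.base64 {"iVBORw0KGgoAAA..."};
    return(deliver);
  }
}
```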
Hello Azure and GCS
For a long time we've had a recipe for connecting to AWS S3 private buckets as origins. We recently added Azure and GCS to that list. Our partnership pages provide more details on the work we're doing with Azure and our collaboration with Google Cloud Platform.
Advanced image optimization
Want to use our Image Optimizer, but don't want to add ugly optimization params to your image URLs? Try our recipe for image transformation classes, and turn that long string of query parameters into a clean URL like /image/large.png. Ahh, that's better (and it significantly reduces the potential for cache fragmentation too).
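One possible scheme, sketched in VCL (the class names, path layout, and chosen parameter values are all assumptions, not the recipe's exact choices): a named class segment in the path is rewritten into the real Image Optimizer query parameters before the request hits the cache.

```vcl
sub vcl_recv {
  # Translate a named transformation class into Image Optimizer
  # query parameters; clients never see or vary the raw params
  if (req.url.path ~ "^/image/large/(.+)$") {
    set req.url = "/image/" re.group.1 "?width=1200&optimize=medium";
  } else if (req.url.path ~ "^/image/small/(.+)$") {
    set req.url = "/image/" re.group.1 "?width=300&optimize=medium";
  }
}
```

Because the rewrite happens before the cache lookup, every client asking for the same class hits the same cache object, which is where the fragmentation win comes from.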
New ideas in preflight: threat intelligence?
The pattern of using an origin to enrich a request and then restarting the edge processing to send the request to a different origin is well known — we call it “preflighting.” Here's a cool idea to use a threat intelligence backend to flag requests containing leaked passwords before they get to your origin.
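The preflight skeleton looks like this in VCL (the /login path, the F_threat_intel backend, and the header names are assumptions for illustration): the first pass routes the request to the intelligence service, and vcl_deliver records the verdict and restarts toward the real origin.

```vcl
sub vcl_recv {
  if (req.restarts == 0 && req.method == "POST" && req.url.path == "/login") {
    # First pass: route to a hypothetical threat-intel backend
    set req.http.X-Preflight = "1";
    set req.backend = F_threat_intel;
    return(pass);
  }
}

sub vcl_deliver {
  if (req.http.X-Preflight == "1") {
    # Capture the verdict (header name assumed), then restart;
    # request headers survive the restart, so the verdict travels
    # with the request to the real origin
    set req.http.X-Threat-Verdict = resp.http.X-Verdict;
    unset req.http.X-Preflight;
    restart;
  }
}
```

On the second pass req.restarts is non-zero, the marker header is gone, and normal backend selection applies, with X-Threat-Verdict available for your origin or for edge logic to act on.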
Most of these solutions don't take advantage of new Fastly features, but with a programmable platform there is a virtually limitless range of things you can do, so I hope you find this new tranche of ideas helpful. Let us know if you have a cool idea we can feature in our developer library.