How Compute is tackling the most frustrating aspects of serverless

Serverless promises a huge advantage: you no longer need to provision and dedicate a specific amount of server resources to your applications. Instead, resources are spun up as needed, and you pay only for what gets used. Taking those operational tasks off your plate means more time to write applications and solve interesting problems. It also simplifies operations, increases agility, and reduces cloud spend, a big win for management.

But not all serverless solutions are created equal. Many can actually slow down your development cycles, add complexity to your stack, and degrade app performance downstream. Many of the folks we’ve talked to have shared that this prevents them from confidently migrating significant portions of their architecture to serverless.

This has been especially true of the first generation of serverless architecture. Cold starts, regional latency, and observability are among the most commonly cited challenges. But Compute, our serverless computing environment built on WebAssembly and Lucet, ushers in a new generation of serverless that addresses many of those problems. Here’s how Compute is tackling these frustrating constraints.

Cold starts

A cold start, the delay that happens when you execute an inactive serverless function, is one of the most documented issues with serverless. Increasingly, voices in the community are helping serverless developers find ways to keep their functions warm or evaluate the overall impact cold starts have on their applications. Though benchmarks vary widely, cold starts can add anywhere from 100 milliseconds to 4 minutes of startup delay. Current workarounds range from custom warming systems to Amazon’s Provisioned Concurrency, neither of which is simple or serverless.

But we had a different idea: what if, instead of building complex systems to avoid cold starts, you could simply prevent them entirely? The reason cold starts are, well, cold, is that there’s no warm container ready to run your function. Even platforms built on V8 incur millisecond-level start times. But because we built our own WebAssembly-based compiler and runtime for Compute, startup times are 100x faster than other offerings on the market.
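In practice, that means a Compute program is just an ordinary request handler: the platform instantiates a fresh WebAssembly sandbox per request rather than keeping containers warm. Here’s a minimal sketch using Fastly’s Rust SDK (the fastly crate); exact names may vary between SDK versions:

```rust
use fastly::http::StatusCode;
use fastly::{Error, Request, Response};

// Each request gets a freshly instantiated WebAssembly sandbox, so there is
// no idle container to keep warm and no cold start to engineer around.
#[fastly::main]
fn main(_req: Request) -> Result<Response, Error> {
    Ok(Response::from_status(StatusCode::OK)
        .with_body_text_plain("Hello from the edge!\n"))
}
```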

Eliminating cold starts opens new possibilities for serverless computing. The significant reduction in latency means there are actually very few applications that can’t be built using the serverless model.

Regional latency

Regional latency is another cause of unpredictability. The most popular serverless solutions ask customers to choose a single region to run their serverless logic. Unless your audience is concentrated in one region, picking just one rarely makes sense: the added latency of sending requests from around the world to a centralized location will affect a large swath of your users.

Compute, however, is designed to run logic simultaneously across thousands of servers around the world, milliseconds away from end users. This eliminates the round-trip latency associated with executing logic in a centralized cloud region.

Observability

For all the good serverless does to diminish operational challenges, it’s undeniable that application architecture itself is still getting more complex. Applications commonly include multiple microservices, run across multiple clouds, and leverage more than one CDN. Stitching together an end-to-end view of performance across a heterogeneous and increasingly distributed system is no small feat.

An entire category of enterprise software is devoted to addressing this problem. But serverless computing platforms don’t provide the same breadth and depth of analytics you get with other parts of your infrastructure. If something goes wrong, diagnosing and fixing it is tough.

Compute builds on Fastly’s heritage of real-time logging to offer logging, tracing, and granular, real-time metrics out of the box. With it, you can expose performance and tracing data using industry-standard tools and export it to a third-party system. And because you can attach arbitrary values to metrics, you can tie this data to your business processes, for example measuring the revenue impact of an outage or error.
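As a concrete sketch, here’s what per-request logging might look like from a Compute program. This assumes the companion log-fastly and log crates, plus a hypothetical log endpoint named request_logs configured on the service to stream into your third-party system of choice:

```rust
use fastly::http::StatusCode;
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // "request_logs" is a hypothetical endpoint configured on the Fastly
    // service; it could stream to S3, BigQuery, Splunk, and so on.
    log_fastly::init_simple("request_logs", log::LevelFilter::Info);

    // Structured, per-request data you could later join against business
    // metrics (e.g. revenue per path) in your analytics pipeline.
    log::info!("handled {} {}", req.get_method(), req.get_path());

    Ok(Response::from_status(StatusCode::OK).with_body_text_plain("ok\n"))
}
```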

Lastly, it’s worth noting that Compute’s approach to predictability extends beyond performance and visibility to security, too. Compute’s isolation technology creates and destroys a sandbox for each request flowing through our platform in microseconds. This removes an entire class of security vulnerabilities, including side-channel attacks, and minimizes your attack surface while maintaining the ability to scale and perform.

Build without boundaries 

If you’ve read this far, I hope you’re already thinking about what it could mean for your application’s performance to run logic with greater predictability and visibility, helping both you and your company move faster. If you want to take it further, we’d love for you to join the Compute beta.

With 10 commands or fewer, you can create services, deploy templates, and check that your applications are running around the world. It’s all supported by our complete documentation and developer hub, which has recipes, templates, and starter kits, making it easy to create at the edge. Sign up to receive email updates about Compute.

MJ Jones
Principal Product Manager, Compute

As the product manager of Compute, MJ guides the development of features for Fastly's serverless compute offering. Before joining Fastly, MJ put his data-driven product management approach to work at Riot Games, Google, GoGuardian, and others.
