To buy or to build?
Fastly has a long history of looking at problems from first principles and being unafraid to undertake difficult projects if we know they will benefit our customers. Building our own custom routing infrastructure may have seemed like a foolish undertaking for a then one-year-old startup, but existing networking solutions forced a tradeoff between performance and control. It was only by doing it ourselves that we could achieve both the performance and flexibility that we knew were required — a decision that is still paying dividends to this day.
Applying this principle to Compute@Edge
Similarly, existing serverless technologies force a tradeoff between performance and security. A “cold start” the first time a serverless function is run takes much longer than subsequent “warm starts,” where everything has already been initialized. Some technologies partially mitigate this problem by reusing the environment across requests, amortizing the cold start cost over many function invocations. But this turns a performance problem into a security problem: reusing environments creates an entire class of security vulnerabilities. Attackers can attempt to access “leftover” data from previous invocations, or deliberately poison the environment to damage subsequent invocations.
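To make the leftover-data risk concrete, here is a minimal sketch of how state can leak between invocations in a reused environment. The `handler` function, the `_cache` variable, and the request shape are all hypothetical, invented for illustration; this is not Fastly code, just a simulation of a warm serverless environment where module-level state outlives a single request.

```python
# Hypothetical serverless handler running in a REUSED environment.
# Module-level state survives between invocations when the runtime
# keeps the environment warm instead of tearing it down.

_cache = {}  # lives for the lifetime of the reused environment


def handler(request):
    # A buggy handler stashes per-caller data in module scope...
    if "secret" in request:
        _cache["last_secret"] = request["secret"]
    # ...so a later invocation, possibly by a different caller,
    # can read back data it was never given.
    return _cache.get("last_secret", "no leftover data")


# Simulate two requests served by the same warm environment:
first = handler({"secret": "alice-token"})
second = handler({})  # a different caller, same reused environment
print(second)  # the second caller sees Alice's leftover secret
```

Giving each invocation a freshly created environment, as described below, makes this failure mode impossible: `_cache` would be empty on every request.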
By eliminating cold start costs, we could create a solution that was both performant and secure. So that’s exactly what we did. Compute@Edge startup times are under 35 microseconds, allowing us to give every single request a completely clean operating environment — eliminating the possibility of data persisting between invocations.