
Introducing Nearline Cache, the first of our commercial solutions to be built in our serverless compute environment — but not the last

The flexible, decentralized nature of serverless makes it easier to run advanced applications and execute custom logic as close to end users as possible. With production traffic now running on Compute@Edge, the platform for building, testing, and deploying code in our serverless compute environment, our customers are bringing to life a wide range of next-gen services that show the promise of building on a distributed edge compute platform. But they’re not the only ones — we’re building on it ourselves to deliver our next generation of products.

Today, we launched Nearline Cache, the first of our commercial solutions to be built in our serverless compute environment, and we’re already feeling the benefits across our engineering teams. 

Nearline Cache lets you automatically populate and store content in third-party cloud storage near one of our POPs, without incurring egress costs. That addresses a very real challenge for long-tail content that might get evicted from cache: with Nearline Cache, you can pull that content back into cache from nearby storage, resulting in overall cost savings and improved origin offload. Plus, there’s minimal latency and no new work for customers, as Nearline Cache populates itself asynchronously on the first cache miss.
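To make that flow concrete, here is a minimal sketch, written in Rust against the Compute@Edge SDK, of how miss-then-populate logic like this could look. This is an illustration only, not Nearline Cache’s actual implementation; the backend names nearline_store and origin are hypothetical placeholders.

```rust
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    let url = req.get_url_str().to_string();

    // Try the nearline object store sitting close to this POP first.
    // "nearline_store" and "origin" are hypothetical backend names.
    let nearline_resp = Request::get(url.as_str()).send("nearline_store")?;
    if nearline_resp.get_status().is_success() {
        // Hit: serve directly from nearby storage, skipping the origin.
        return Ok(nearline_resp);
    }

    // Miss in nearline storage: fetch from origin as usual.
    let mut origin_resp = req.send("origin")?;
    let body = origin_resp.take_body_bytes();

    // Kick off an asynchronous write-back so the object is waiting in
    // nearline storage the next time it is evicted from cache; the
    // client response doesn't block on it. (Sketch only; the real
    // product handles this population for you.)
    let _pending = Request::put(url.as_str())
        .with_body(body.clone())
        .send_async("nearline_store")?;

    // Serve the origin response to the client.
    Ok(origin_resp.with_body(body))
}
```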

We originally built Nearline Cache as a custom solution using a cloud service provider, with the plan to build it into a full product using Compute@Edge. I interviewed two of the engineers who worked on this project to hear — in their own words — how migrating Nearline Cache onto our serverless compute environment using Compute@Edge affected performance and development cycles and got our team thinking through more long-term capabilities. 

Read on for my chat with Fastly Edge Applications Engineer Craig Campbell and Senior Manager for Fastly Engineering Anyell Cano.

Why did we choose to migrate Nearline Cache from a cloud service provider to our serverless compute environment? 

Anyell: We originally designed the solution to run on a cloud provider, but we saw an opportunity to benefit from the advanced performance, scale, and security capabilities our serverless compute environment offers, so we decided to migrate there using Compute@Edge. The work went quickly because Compute@Edge’s performance, flexibility, and ease of use made it the right place to build this application.

I heard you used the same documentation, tools, and support our customers get for Compute@Edge. What was that like? 

Craig: I found the Compute@Edge documentation to be quite good for getting started with building in our serverless compute environment. I also really liked using the Fastly Command Line Interface (CLI) and being able to stay in one place to build, deploy, and test. Actually, testing was something we really struggled with when we were using and maintaining Nearline Cache with our cloud service provider, but with the Fastly CLI, I could deploy and test right there. It was super straightforward. 

At the very beginning, having the initial Starter Kit was super helpful when I didn’t know a lot. I got set up easily, familiarized myself with the request and response flows, and started digging into Rust (the most mature development language available with Compute@Edge at the time) — it was all there for me, which was incredibly helpful.
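For anyone who hasn’t looked at the starter kits, the basic request-and-response flow of a Compute@Edge program in Rust is roughly the skeleton below. This is a generic sketch, not project code; the backend name origin and the header are placeholders.

```rust
use fastly::{Error, Request, Response};

// The entry point runs once per request at the edge: inspect the
// incoming request, optionally forward it to a backend, and return
// a response to the client.
#[fastly::main]
fn main(mut req: Request) -> Result<Response, Error> {
    // Request-side logic: for example, tag traffic before forwarding.
    req.set_header("x-edge-app", "starter-sketch");

    // Forward to a backend configured on the service ("origin" is a
    // placeholder name), then hand its response back to the client.
    let resp = req.send("origin")?;
    Ok(resp)
}
```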

Anyell: It also felt like every question was answered quickly, and Craig was able to find what he needed in the Fastly Developer Hub. This was helpful not only for us, but also for future users of Compute@Edge. Since we were effectively dogfooding the product, we were able to give our Docs team useful feedback to help them improve the documentation for customers who build on our platform.

Can you tell me a little more about what went into learning Rust?

Craig: It’s pretty funny, because my first thought was: if I were writing this in Go, I could have it done in a week. I was actually a little intimidated about learning Rust out of the gate, but I quickly became comfortable with it. There was a bit of a learning curve, but Rust is a popular language with tremendous community support behind it, which ultimately made things easier for me.

I did a lot of the learning myself, but there was also a lot of material available to me. There were some existing code samples in the Fastly Developer Hub, and I could have just copied and pasted — but in the beginning I tried to implement things on my own for the sake of learning. Now, though, I constantly use the example code available.

How would you describe the process of maintaining and versioning the code? How did your experience building in Compute@Edge differ from building in VCL?

Craig: I have done a fair amount of development work using VCL (Varnish Configuration Language), including in past jobs. I found that working in Compute@Edge was a lot less limiting, and I could write unit tests as part of the continuous integration flow. That was huge.
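As a concrete example of why that matters: a Compute@Edge program is ordinary Rust, so pure logic can be covered by standard cargo test in CI. The nearline_key helper below is hypothetical, shown only to illustrate the pattern.

```rust
/// Derive a storage key from a request path.
/// (Hypothetical helper, shown only to illustrate plain-Rust unit testing.)
fn nearline_key(path: &str) -> String {
    // Drop the query string and the leading slash so keys stay stable.
    let path = path.split('?').next().unwrap_or(path);
    path.trim_start_matches('/').to_string()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn key_ignores_query_string() {
        assert_eq!(nearline_key("/video/clip.mp4?token=abc"), "video/clip.mp4");
    }

    #[test]
    fn key_strips_leading_slash() {
        assert_eq!(nearline_key("/images/logo.png"), "images/logo.png");
    }
}
```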

It actually sped up development velocity quite a bit. Our serverless compute environment not only allowed us to reduce our own costs, but also let us reduce the number of requests we needed to make, since we could push content directly from our CDN and bypass third parties.

Building in our serverless environment also made things super simple: now we have a single application that does everything, which is way better for maintaining and scaling. When I first started working on this project, I actually had to make a small improvement on the cloud service provider side because we were still maintaining the application there, and just getting that change up and running and deployed was way more work than anything I was doing with Compute@Edge. Our CLI let us build and deploy, and all of a sudden we were already testing.

What do you see happening next for Nearline Cache in our serverless compute environment?

Anyell: I think leveraging our serverless compute environment has helped us think through some goals and directions moving forward. One of the things we’re going to figure out very soon is how to automate the integration between our serverless compute environment and VCL, and we’re also thinking through how to extend our observability capabilities for Compute@Edge.

I’m glad you brought that up. Let’s dig into the observability piece for a minute. Can you tell me more about that?

Craig: There were some features that were readily available, and we loved using them. We relied quite heavily on log tailing and on integration with our broader observability stack, including BigQuery. We also found some potential areas for improvement, which we were able to feed back to the team.

Log tailing was great, though, and I basically used it nonstop during development. I would do all the dev work, deploy to production, and then debug using log tailing. It made for a much easier workflow than what I had on the cloud service provider we were using.
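To illustrate the loop Craig describes: anything a Compute@Edge program writes to stdout or stderr shows up in the CLI’s log tailing, so quick debug output can be as simple as the sketch below. Again, the backend name origin is a placeholder.

```rust
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // println!/eprintln! output is captured by `fastly log-tail`,
    // which makes a deploy-then-debug loop fast.
    println!("request: {} {}", req.get_method(), req.get_path());

    let resp = req.send("origin")?; // "origin" is a placeholder backend
    eprintln!("origin responded with status {}", resp.get_status());
    Ok(resp)
}
```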

Would you recommend using Compute@Edge for other Fastly products? Why? 

Anyell: Definitely. In fact, going forward we will look to build more of Fastly’s applications with Compute@Edge, leveraging the benefits of serverless where it makes sense. We don’t want to miss out on the ease of use and the performance benefits we saw with Nearline Cache, and it gives us the ability to build faster and more reliably. We’re excited to see what other improvements we find.

Craig: With Compute@Edge, there is a lot more control: you can write unit tests and refactor, and the workflow was great. Especially if it’s a new Fastly service, I would probably build it in our serverless compute environment.

How would you sum up the experience of migrating Nearline Cache to our serverless compute environment? 

Craig: A big thing for me was that we were able to meet our goals for this project in a very short period of time. Compute@Edge gave us the flexibility to build and deploy Nearline Cache in only a fraction of the time it would have taken otherwise.

Anyell: Agreed. Building with Compute@Edge’s functionality was clearly the best choice when it came to Nearline Cache. Not only did it give us a robust, scalable, and developer-friendly way to build applications at scale, but it allowed us to innovate much faster.


This article contains “forward-looking” statements that are based on Fastly’s beliefs and assumptions and on information currently available to Fastly on the date of this press release. Forward-looking statements may involve known and unknown risks, uncertainties, and other factors that may cause its actual results, performance, or achievements to be materially different from those expressed or implied by the forward-looking statements. These statements include, but are not limited to, those regarding the expected benefits and functionality of Compute@Edge and Nearline Cache, including enabling faster innovation and lowering costs, scaling with better performance, enhanced visibility, and reduced latency, and speeding up the develop, test, and deploy cycle, Fastly’s ability to find and develop new and improved features such as automated integration and observability capabilities, how future Fastly applications will be built, and Fastly’s ability to build faster and more reliably. Except as required by law, Fastly assumes no obligation to update these forward-looking statements publicly, or to update the reasons actual results could differ materially from those anticipated in the forward-looking statements, even if new information becomes available in the future. Important factors that could cause Fastly’s actual results to differ materially are detailed from time to time in the reports Fastly files with the Securities and Exchange Commission (SEC), including in Fastly’s Annual Report on Form 10-K for the fiscal year ended December 31, 2020, and our Quarterly Reports on Form 10-Q. Copies of reports filed with the SEC are posted on Fastly’s website and are available from Fastly without charge.
