
Introducing Compute@Edge Log Tailing for better observability and easier debugging

The appeal of serverless computing is undeniable — there’s high value in decentralizing architectures and deploying globally without needing to pre-plan resource availability. So it’s no wonder that Gartner predicts that by 2025, half of global enterprises will have deployed serverless computing, up from only 20% now. 

We’ve been enhancing our own powerful serverless compute environment — Compute@Edge, now running live production traffic — by making it easier to solve edge use cases using familiar programming languages, as well as build advanced applications and custom logic as close to end users as possible. Today, we’re releasing another improvement that brings the promise of serverless computing one step closer. 

Compute@Edge Log Tailing lets customers read logs in near-real-time without relying on a third-party service, enabling quick debugging of edge applications during development. Let’s dig into the why and how. 

Observability at the edge

Compute@Edge’s enhanced observability capabilities feature real-time logging and stats, as well as the ability to expose tracing data using industry-standard tools and export it to a third-party system. However, serverless computing is often just one part of an organization’s overall application development strategy, deployed alongside a mix of other technologies (2019 Community Survey). Stitching together an end-to-end view of performance across a heterogeneous and increasingly distributed system is no small feat. 

Until today, to view application logs and debug data from an active Compute@Edge Wasm service, a user had to configure a third-party log management tool and add a logging endpoint to their service. That made simple, fast debugging more difficult than we’d like. Log Tailing changes that. 

Unlike debugging features in other serverless computing solutions, Log Tailing lets developers stream their own custom log messages directly to their terminal of choice using the Fastly Command Line Interface while testing applications running on Fastly’s Compute@Edge platform. With it, developers have real-time visibility into their edge application’s stdout and stderr output, so they can resolve issues efficiently without configuring and paying for additional third-party log management services. (Customers still need to configure a third-party log management service to archive logs for long-term storage and analysis.) 

Access to logs in near-real-time from the Compute@Edge CLI enables quick debugging during the dev-push-validate loop. Applications can be developed and debugged in one place and brought to production sooner, with observability data available over the recent, short time frames that fast development cycles demand.
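As a sketch of that loop (assuming the Fastly CLI is installed and authenticated; the service ID below is a placeholder), a typical iteration might look like:

```shell
# Build the Wasm package locally, then deploy it to the service.
fastly compute build
fastly compute deploy

# In a separate terminal, stream the service's stdout/stderr output
# while exercising the deployed application.
fastly logs tail --service-id=<your-service-id>
```

Each edit-deploy-tail cycle gives immediate feedback without leaving the terminal.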

How is it used? 

To show how Log Tailing might be used, let’s start with the Compute@Edge default starter kit for Rust, which demonstrates routing, simple synthetic responses, and overriding caching rules. We’ll modify the starter kit to include multiple print statements, some printing to stdout and some to stderr. 

// Pattern match on the request method and path.
match (req.method(), req.uri().path()) {
    // If request is a `GET` to the `/` path, send a default response.
    (&Method::GET, "/") => {
        println!("Hello from the root path!");
        Ok(Response::builder()
            .status(StatusCode::OK)
            .body(Body::from("Welcome to Fastly Compute@Edge!"))?)
    }
    // If request is a `GET` to the `/backend` path, send to a named backend.
    (&Method::GET, "/backend") => {
        println!("Hello from the /backend path!");
        // Request handling logic could go here...
        // E.g., send the request to an origin backend and then cache the
        // response for one minute.
        *req.cache_override_mut() = CacheOverride::ttl(60);
        Ok(req.send(BACKEND_NAME)?)
    }
    // If request is a `GET` to a path starting with `/other/`.
    (&Method::GET, path) if path.starts_with("/other/") => {
        println!("Hello from the {} path", path);
        // Send request to a different backend and don't cache response.
        *req.cache_override_mut() = CacheOverride::Pass;
        Ok(req.send(OTHER_BACKEND_NAME)?)
    }
    // Catch all other requests and return a 404.
    _ => {
        let client_ip = downstream_client_ip_addr().ok_or(anyhow!("could not get client ip"))?;
        let geo = geo_lookup(client_ip).ok_or(anyhow!("no geographic data available"))?;
        eprintln!(
            "Bad request to path {} from someone in {}, {}",
            req.uri().path(),
            geo.city(),
            geo.country_code3()
        );
        Ok(Response::builder()
            .status(StatusCode::NOT_FOUND)
            .body(Body::from("The page you requested could not be found"))?)
    }
}

This simple example lets us debug a Wasm application and see which paths are being hit. After deploying it, we can use curl to exercise our path logic and watch the Log Tailing output, without needing to set up our own logging endpoint.

In the following example, the application is deployed to https://logtail-demo.edgecompute.app/

To get started with the Log Tailing service, those of you with access to Compute@Edge today will just need to reach out to your account team to get set up.

$ fastly logs tail --service-id=<redacted>
SUCCESS: Managed logging enabled on service <redacted>

In another terminal, we run curl against our application:

$ curl https://logtail-demo.edgecompute.app/
$ curl https://logtail-demo.edgecompute.app/backend
$ curl https://logtail-demo.edgecompute.app/other/path
$ curl https://logtail-demo.edgecompute.app/notfound

The Log Tailing output will then print the stream (stdout for println and stderr for eprintln), a unique request ID, and the message.

$ fastly logs tail --service-id=<redacted>
SUCCESS: Managed logging enabled on service <redacted>
| stdout | 4718faa4 | Hello from the root path! |
| stdout | 28dae28d | Hello from the /backend path! |
| stdout | 8b90d6a6 | Hello from the /other/path path |
| stderr | 9515856c | Bad request to path /notfound from someone in ft collins, USA |
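The mapping of println to stdout and eprintln to stderr is just standard Rust behavior, so you can reason about it outside the Compute@Edge context. Here is a minimal sketch with no Fastly dependencies; the format_log_line helper is hypothetical, added only to mimic the tail output shown above:

```rust
// Plain Rust: println! writes to stdout and eprintln! writes to stderr,
// the same two streams that Log Tailing labels in its output.
// `format_log_line` is a hypothetical helper mimicking the tail format.
fn format_log_line(stream: &str, request_id: &str, message: &str) -> String {
    format!("| {} | {} | {} |", stream, request_id, message)
}

fn main() {
    println!("Hello from the root path!"); // would be labeled stdout
    eprintln!("Bad request to path /notfound"); // would be labeled stderr
    println!(
        "{}",
        format_log_line("stdout", "4718faa4", "Hello from the root path!")
    );
}
```

Anything an application writes to either stream, through any Rust API, would show up in the tail under the corresponding label.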

If there are multiple println or eprintln calls per request, they will be ordered and grouped in the output. We can modify our "/" handler to show this:

(&Method::GET, "/") => {
    println!("Hello from the root path!");
    println!("2nd println in this request");
    println!("3rd println in this request");
    println!("4th println in this request");
    println!("5th println in this request");
    Ok(Response::builder()
        .status(StatusCode::OK)
        .body(Body::from("Welcome to Fastly Compute@Edge!"))?)
}

If we were to run two curl commands to https://logtail-demo.edgecompute.app/ at the same time, we would see the output grouped by the unique ID:

| stdout | 57bc24b2 | Hello from the root path! |
| stdout | 57bc24b2 | 2nd println in this request |
| stdout | 57bc24b2 | 3rd println in this request |
| stdout | 57bc24b2 | 4th println in this request |
| stdout | 57bc24b2 | 5th println in this request |
| stdout | 6e40d49a | Hello from the root path! |
| stdout | 6e40d49a | 2nd println in this request |
| stdout | 6e40d49a | 3rd println in this request |
| stdout | 6e40d49a | 4th println in this request |
| stdout | 6e40d49a | 5th println in this request |

What’s next?

You hear us talk about the edge a lot, and Compute@Edge continues to expose fundamentally new ways to build there with the tools you want and need. This is just one of many enhancements ahead. Sign up here to receive updates on future Compute@Edge news.
