---
title: Caching content with Fastly
summary: null
url: https://www.fastly.com/documentation/guides/concepts/cache
---


The Fastly edge cache is an enormous pool of storage across the platform's network that allows you to satisfy end-user requests with exceptional performance and reduce the need for requests to your backend servers.

Most use cases make use of the [readthrough cache interface](#readthrough-cache), which works automatically with the HTTP requests that transit your Fastly service to save responses in cache so they can be reused. The first time a cacheable resource is requested at a particular [POP](/guides/concepts/pop), the resource will be requested from your backend server and stored in cache automatically. Subsequent requests for that resource can then be satisfied from cache without having to be forwarded to your servers.
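The readthrough pattern itself is simple enough to sketch outside of Fastly's APIs. The plain JavaScript below is illustrative only: the `fetchOrigin` function is a hypothetical stand-in for your backend server, and the `Map` stands in for the shared cache layer.

```javascript
// Illustrative readthrough sketch (not the Fastly API).
const cache = new Map();
let originFetches = 0;

function fetchOrigin(path) {
  // Hypothetical stand-in for a backend fetch.
  originFetches++;
  return { status: 200, body: `contents of ${path}` };
}

function readthrough(path) {
  if (cache.has(path)) return cache.get(path); // hit: served from cache
  const resp = fetchOrigin(path);              // miss: go to the backend...
  cache.set(path, resp);                       // ...and populate the cache
  return resp;
}

readthrough("/hello.html"); // first request: fetched from the backend
readthrough("/hello.html"); // subsequent request: satisfied from cache
console.log(originFetches); // 1
```

The backend is contacted once; every later request for the same resource is satisfied from the stored copy.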

Other cache interfaces, such as [simple](#simple-cache) and [core](#core-cache), offer direct access to the shared cache layer from your own code and are exclusively available in [Compute](/guides/compute) services.

Use cases for caching vary widely, and while some might be very simple, others benefit from some clever features built into the cache mechanism:

* [**HTTP caching semantics**](/guides/concepts/cache/cache-freshness), such as the `Cache-Control` header, are part of the [HTTP Caching standard (RFC 9111)](https://httpwg.org/specs/rfc9111.html) and allow HTTP responses to include information that the Fastly cache uses to decide how they should be cached.
* [**Request collapsing**](/guides/concepts/cache/request-collapsing) allows us to identify multiple simultaneous requests for the same resource, and make just one backend fetch for it, using the resulting response to populate the cache and satisfy all waiting clients.
* [**Range collapsing**](#ranged-requests) merges requests for separate byte ranges of a backend object into a single backend fetch for the entire object. The resulting response is used to populate the cache and fulfill future client requests for any byte range of the object, as well as to manage the lifetime for the entire object as a whole.
* [**Streaming miss**](/guides/concepts/cache/request-collapsing/#collapsing-during-response-phase) writes a response stream to cache and to an end user at the same time.
* [**Client revalidation**](#conditional-requests) handles conditional headers from a client indicating whether its cached copy is still valid, allowing us to send the response body only when necessary. In addition, depending on the state of the object in the cache, this process may forward the request to the backend to add or refresh the cached object.
* [**Backend revalidation**](#backend-revalidation) adds conditional headers to a client request as it is forwarded to a backend, when the request is for a cached object that is stale. If the backend validates that the content in the cache is still good to use, it can instruct the cache to extend the lifetime of the object in place without having to send its contents again.
* [**Purging**](/guides/concepts/cache/purging/) allows cache entries to be expunged ahead of their normal expiry, so that changes to the source content can be reflected at the edge immediately.
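Of these, request collapsing lends itself to a short sketch. The plain JavaScript below is illustrative only (it is not the Fastly implementation): the first request for a key triggers the single backend fetch, simultaneous requests for the same key wait on it, and the one response satisfies every waiter.

```javascript
// Illustrative request-collapsing sketch (not the Fastly implementation).
const waiters = new Map(); // key -> callbacks waiting on one in-flight fetch
let originFetches = 0;

// A client asks for `key`; `deliver` is called when the body is available.
function request(key, deliver) {
  if (waiters.has(key)) {
    waiters.get(key).push(deliver); // collapse onto the in-flight fetch
  } else {
    waiters.set(key, [deliver]);
    originFetches++;                // only the first request hits the backend
  }
}

// The single backend fetch for `key` finishes: satisfy every collapsed waiter.
function originResponse(key, body) {
  for (const deliver of waiters.get(key)) deliver(body);
  waiters.delete(key);
}

const got = [];
request("/hello.html", (b) => got.push(b));
request("/hello.html", (b) => got.push(b)); // collapsed; no second fetch
originResponse("/hello.html", "Hello!");
console.log(originFetches, got.length); // 1 2
```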

Balancing the benefits of these caching features with the desire for simplicity is the reason [multiple different cache interfaces](#interfaces) are offered (although they all result in objects being stored in the [same cache layer](#interoperability)).

> **IMPORTANT:** All data stored in the Fastly cache is *ephemeral*: it will expire, and may be evicted by the platform before it expires depending on how frequently it is used. If you require persistent storage at the edge, consider using dynamic configuration like [dictionaries](/guides/full-site-delivery/dictionaries/about-dictionaries), [access control lists](/guides/security/access-control-lists/about-acls/), or [data stores](/guides/compute/edge-data-storage/about-edge-data-stores) instead.

<Partial name="http-cache-api-availability" />

## Interfaces

The Fastly cache is available through a **readthrough** interface (HTTP cache interface) built into the fetch mechanism. In Compute, it is additionally available using explicit calls to the **simple** or **core** cache interfaces.

|           | Readthrough (HTTP cache)       | Simple       | Core       |
|-----------|-------------------|--------------|------------|
| Overview | Automatically caches HTTP responses based on [HTTP caching semantics](/guides/concepts/cache/cache-freshness) as requests traverse through Fastly's network to a backend. | Offers a straightforward `getOrSet` method for programmatic access to the cache for simple use cases. | Offers programmatic access to the cache with full control over all cache metadata and semantics, intended for advanced use cases or building custom higher level abstractions. |
| Platform | VCL and Compute | Compute | Compute |
| Use it for... | Automatic caching | Simple key-value caching | Complex requirements |
| Cache freshness | HTTP semantics | Explicit | Explicit |
| Request collapsing | Heuristic | Always-on | Manual control |
| Range collapsing | ✅ (automatic) | ❌ | ✅ (manual) |
| Streaming miss | ✅ (automatic) | ❌ | ✅ (manual) |
| Client revalidation | ✅ (automatic) | ❌ | ✅ (manual) |
| Backend revalidation | ✅ (automatic) | ❌ | ✅ (manual) |
| Surrogate keys | ✅ | ❌ | ✅ |
| Purging | ✅ | ✅ | ✅ |

These interfaces all access the same underlying storage layer, and use the same address space. See [interoperability](#interoperability) for details.
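As a point of reference, the `getOrSet` shape offered by the simple cache interface can be sketched in plain JavaScript. This is illustrative only; the real interface is provided by the Compute SDKs, and the `Map` here stands in for the shared cache layer.

```javascript
// Illustrative getOrSet sketch (not the actual Compute SDK interface).
const store = new Map(); // stands in for the shared cache layer
let computes = 0;

function getOrSet(key, compute) {
  if (store.has(key)) return store.get(key); // found: return the cached value
  const value = compute();                   // missing: compute it once...
  computes++;
  store.set(key, value);                     // ...and store it for next time
  return value;
}

getOrSet("greeting", () => "hello"); // computes and caches "hello"
getOrSet("greeting", () => "other"); // returns the cached "hello"
console.log(computes); // 1
```

The caller supplies the logic to produce a missing value; the cache decides whether that logic needs to run.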

## Readthrough (HTTP) cache

<div id="readthrough-cache" />

The readthrough cache (HTTP cache) is the Fastly cache interface your services are most likely to use. In both [VCL](/guides/full-site-delivery/fastly-vcl) and [Compute](/guides/compute) services, the readthrough interface is enabled by default and invoked every time you make a request to a backend from your edge application. It is the only cache interface available to VCL services.

<!-- TabbedPanels component: 
<Panel id="vcl">

In a VCL service, the readthrough interface works without any configuration or code required.

</Panel>
<Panel id="rust">

In a Compute service, you include code to send a request to a backend. Calling `Request::send()` and specifying the name of a backend will invoke the readthrough cache:

```rust
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    Ok(req.send("my_backend_name")?)
}
```

</Panel>
<Panel id="javascript">

In a Compute service, you include code to send a request to a backend. Calling `fetch()` and specifying the name of a backend will invoke the readthrough cache:

```js
/// <reference types="@fastly/js-compute" />

addEventListener("fetch", event => event.respondWith(handler(event)));

function handler(event) {
  return fetch(event.request, { backend: "my_backend_name" });
}
```

</Panel>
<Panel id="go">

In a Compute service, you include code to send a request to a backend. Calling `func (*Request) Send()` and specifying the name of a backend will invoke the readthrough cache:

```go
package main

import (
	"context"
	"fmt"
	"io"

	"github.com/fastly/compute-sdk-go/fsthttp"
)

func main() {
	fsthttp.ServeFunc(func(ctx context.Context, w fsthttp.ResponseWriter, r *fsthttp.Request) {
		resp, err := r.Send(ctx, "my_backend_name")
		if err != nil {
			w.WriteHeader(fsthttp.StatusBadGateway)
			fmt.Fprintln(w, err.Error())
			return
		}

		w.Header().Reset(resp.Header)
		w.WriteHeader(resp.StatusCode)
		io.Copy(w, resp.Body)
	})
}
```

</Panel>
 -->

The readthrough cache interface understands [HTTP caching semantics](/guides/concepts/cache/cache-freshness) and seamlessly supports [request collapsing](/guides/concepts/cache/request-collapsing) (based on cacheability of the object), [range collapsing](#ranged-requests), [streaming miss](/guides/concepts/cache/request-collapsing/#streaming-miss), [revalidating client requests](#conditional-requests), [revalidating cached content with the backend](#backend-revalidation), and [purging](/guides/concepts/cache/purging).

> **IMPORTANT:** The behavior of Fastly's readthrough cache is based on the [HTTP Caching standard (RFC 9111)](https://httpwg.org/specs/rfc9111.html), with some exceptions. Refer to [Divergences from RFC 9111](/guides/concepts/cache/cache-freshness/#divergences-from-rfc-9111) for details.

### Automatic request transformations

The readthrough cache is designed for HTTP and performs the following transformations automatically to make cache usage simple and efficient.

* Ranged requests

  <div id="ranged-requests"></div>

  [Ranged requests](https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests) are requests that use headers to ask the server to return a desired byte range of an object's body, rather than the entire body. If the server honors the range, it will return a response that has the status `206 Partial Content` and that contains only the specified part of the body.

  The readthrough cache interface _automatically transforms a ranged request into a request for the entire body_ when forwarding it to the backend, so that the cache always contains the entire object. The cached response is then transformed back into a `206 Partial Content` response containing the requested part of the body before returning it to the client. Subsequent requests for the entire object or for a range (even a different range) of the object are fulfilled by this cached object, which is transformed for each range as necessary.

  This enables the cache to manage lifetime [according to HTTP caching semantics](/guides/concepts/cache/cache-freshness) for the entire object as a single entity. Additionally, this enables multiple simultaneous requests to the same or different parts of a single object to be collapsed using the [request collapsing](/guides/concepts/cache/request-collapsing) mechanism.

* Conditional (aka revalidation) requests

  <div id="conditional-requests"></div>

  [Conditional requests](https://developer.mozilla.org/en-US/docs/Web/HTTP/Conditional_requests) are requests that use headers to ask the server either to "revalidate" an existing response (e.g., by reporting that the object hasn't been modified since it was given), or else to return a normal response. If the object is still valid, the server is expected to return a bodiless `304 Not Modified` response with updated headers that can be used to freshen a cached response, including updating its effective TTL.

  The readthrough cache interface _automatically processes conditional requests from the client_ by first treating them as normal requests, potentially bringing response data into the cache. Afterward, the readthrough cache attempts a revalidation against the response's validators to determine whether to return a `304 Not Modified` or a normal response to the client.

* Requests for stale objects

  <div id="backend-revalidation"></div>

  If a requested object exists in the Fastly cache as a stale object, the readthrough cache will _automatically issue conditional requests to a backend_ when possible to update (revalidate) its stored copy. In response, the backend can avoid transferring the body if it has not changed, instructing us to update just the headers and lifetime of the cached object in place.

  When within a `stale-while-revalidate` period, these backend revalidations may be issued asynchronously, immediately returning the stale object for client use.

  See [revalidation](/guides/concepts/cache/stale#revalidation) for details.
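The lifetime states involved in these decisions can be sketched as a small classifier. This is illustrative only; the actual behavior follows [HTTP caching semantics](/guides/concepts/cache/cache-freshness), and the `freshness` function and its return values are hypothetical labels.

```javascript
// Illustrative freshness classifier (the real rules follow RFC 9111).
// age, ttl, and swr are all in seconds.
function freshness(age, ttl, swr) {
  if (age < ttl) return "fresh";            // serve directly from cache
  if (age < ttl + swr) return "stale-while-revalidate"; // serve stale, revalidate asynchronously
  return "stale";                           // revalidate with the backend first
}

console.log(freshness(10, 60, 30));  // "fresh"
console.log(freshness(70, 60, 30));  // "stale-while-revalidate"
console.log(freshness(120, 60, 30)); // "stale"
```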

> **IMPORTANT:** Note that client revalidation and backend revalidation occur independently:
> * A client request may or may not be a conditional request.
> * The request may or may not be fulfilled by the Fastly cache. If it is not fulfilled by the cache (i.e., it is for a missing or stale object), the readthrough cache interface forwards it to the backend, in order to add or update the cached object in the cache.
>    * At this time, if the object has a **validator** (an `ETag` or `Last-Modified` header), the cache interface automatically adds conditional headers to the forwarded request. If the backend is able to revalidate it, it can update the headers and lifetime of the cache object without sending the body again.
> * Regardless of whether the readthrough cache forwards the request to the backend, it responds to the client request:
>    * If the client request was a conditional request, and the content in the readthrough cache is able to revalidate it, then it returns to the client a revalidation response that does not include the body.
>    * Otherwise, the readthrough cache returns to the client a normal response that includes the body.

Automatically interpreting and issuing ranged and conditional requests are significant benefits provided by an HTTP-specific caching interface.
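As an illustration of the client-revalidation half, the decision reduces to a validator comparison. The sketch below is plain JavaScript and is not the Fastly implementation; the `respondToClient` function and its object shapes are hypothetical.

```javascript
// Illustrative client-revalidation sketch (not the Fastly implementation).
// If the client's If-None-Match value matches the cached ETag, the body
// can be omitted; otherwise a normal 200 with the body is returned.
function respondToClient(cached, ifNoneMatch) {
  if (ifNoneMatch !== undefined && ifNoneMatch === cached.etag) {
    return { status: 304, body: null }; // client's copy is still valid
  }
  return { status: 200, body: cached.body };
}

const cached = { etag: '"abc123"', body: "Hello!" };
console.log(respondToClient(cached, '"abc123"').status); // 304
console.log(respondToClient(cached, '"old"').status);    // 200
console.log(respondToClient(cached, undefined).status);  // 200
```

Real validator matching also covers `Last-Modified` dates and weak comparison, but the shape of the decision is the same.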

### Controlling cache behavior

The readthrough cache is a good starting point for most caching use cases. There is no explicit read/write method, but it's possible to control the caching behavior in a number of ways, described below.

#### **Bypassing the readthrough cache for a request**

You can mark requests to ensure they are not served from cache.

<!-- TabbedPanels component: 
<Panel id="vcl">

In a VCL service, `return(pass)` from `vcl_recv` or `vcl_miss`.

</Panel>
<Panel id="rust">

In a Compute service written in Rust, use the `Request::set_pass()` method.

```rust
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(mut req: Request) -> Result<Response, Error> {
    req.set_pass(true);
    Ok(req.send("my_backend_name")?)
}
```

</Panel>
<Panel id="javascript">

In a Compute service written in JavaScript, construct a `CacheOverride` object with the value `"pass"` and include it in your `fetch()` call.

```js
/// <reference types="@fastly/js-compute" />


addEventListener("fetch", event => event.respondWith(handler(event)));

function handler(event) {
  return fetch(event.request, {
    backend: "my_backend_name",
    cacheOverride: new CacheOverride("pass"),
  });
}
```

</Panel>
<Panel id="go">

In a Compute service written in Go, the `Request` object has a `CacheOptions` field. Set the value of the `Pass` field of this object to `true`.

```go
r.CacheOptions.Pass = true
resp, err := r.Send(ctx, "my_backend_name")
```

</Panel>
 -->

> **IMPORTANT:** A request marked to bypass the cache will also bypass [automatic request transformations](#automatic-request-transformations) for that request.

#### **Setting cache policy on a request**

It's possible to explicitly set a TTL when making a request in Compute services, but not in VCL services.

<!-- TabbedPanels component: 
<Panel id="vcl">

In a VCL service, it's only possible to set a TTL when the response is received. See [Setting cache policy on a response](#setting-cache-policy-on-a-response) below.

</Panel>
<Panel id="rust">

In a Compute service written in Rust, use `Request::set_ttl()`.

```rust
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(mut req: Request) -> Result<Response, Error> {
    req.set_ttl(60);
    Ok(req.send("my_backend_name")?)
}
```

</Panel>
<Panel id="javascript">

In a Compute service written in JavaScript, construct a `CacheOverride` object, specifying the `ttl` property in the constructor parameter.

```js
/// <reference types="@fastly/js-compute" />


addEventListener("fetch", event => event.respondWith(handler(event)));

function handler(event) {
  return fetch(event.request, {
    backend: "my_backend_name",
    cacheOverride: new CacheOverride({ ttl: 60 }),
  });
}
```

</Panel>
<Panel id="go">

In a Compute service written in Go, set the `TTL` field on the `CacheOptions` field.

```go
r.CacheOptions.TTL = 60
resp, err := r.Send(ctx, "my_backend_name")
```

</Panel>
 -->

#### **Controlling the cache key**

By default, the readthrough cache for both VCL and Compute services uses a combination of the request URL (including the path and query) and `Host` header as its cache key to create unique HTTP objects. Headers are taken into account according to any `Vary` rules, as a kind of secondary cache key.

For example, the following URLs will cause distinct objects to be cached:

  * https&#58;//www\.example.com/hello.html
  * https&#58;//example.com/hello.html
  * https&#58;//example.com/hello.html?foo=42
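A hypothetical sketch of that default key derivation shows why these cache as distinct objects (the real key is internal to Fastly; `cacheKey` is an illustrative function, not an API):

```javascript
// Illustrative default cache key: Host header + path and query.
function cacheKey(host, pathAndQuery) {
  return `${host}${pathAndQuery}`;
}

const keys = [
  cacheKey("www.example.com", "/hello.html"),     // differs by Host header
  cacheKey("example.com", "/hello.html"),
  cacheKey("example.com", "/hello.html?foo=42"),  // differs by query string
];
console.log(new Set(keys).size); // 3 distinct objects
```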

<!-- TabbedPanels component: 
<Panel id="vcl">

In a VCL service, you can manipulate the cache key explicitly by adding a request setting via the web interface. For more information, check out our guide to [manipulating the cache key](/guides/full-site-delivery/caching/manipulating-the-cache-key).

</Panel>
<Panel id="rust">

In a Compute service written in Rust, you can manipulate the cache key explicitly by using `Request::set_cache_key()`.

```rust
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(mut req: Request) -> Result<Response, Error> {
    req.set_cache_key("custom_cache_key");
    Ok(req.send("my_backend_name")?)
}
```

</Panel>
<Panel id="javascript">

In a Compute service written in JavaScript, you can manipulate the cache key explicitly by setting the `cacheKey` field when performing the backend fetch.

```js
/// <reference types="@fastly/js-compute" />

addEventListener("fetch", event => event.respondWith(handler(event)));

function handler(event) {
  return fetch(event.request, {
    backend: "my_backend_name",
    cacheKey: "custom_cache_key",
  });
}
```

</Panel>
<Panel id="go">

In a Compute service written in Go, you can manipulate the cache key explicitly by setting the `OverrideKey` field of `CacheOptions` when performing the backend fetch.

```go
r.CacheOptions.OverrideKey = "custom_cache_key"
resp, err := r.Send(ctx, "my_backend_name")
```

</Panel>
 -->

#### **Setting cache policy on a response**

<!-- TabbedPanels component: 
<Panel id="cdn-services">

In a VCL service, set `beresp.ttl` in `vcl_fetch` to adjust the cache lifetime of a response, or use the `beresp.http.Surrogate-Key` header to add surrogate keys to the response.

</Panel>
<Panel id="compute-services">

In Compute services, use an [after-send callback](#controlling-cache-behavior-based-on-backend-response) to configure cache policy based on a response.

</Panel>
 -->

#### **Knowing a response object's cache state**

<div id="knowing-whether-a-response-is-coming-from-cache-or-network"></div>

<!-- TabbedPanels component: 
<Panel id="vcl">

In a VCL service, the `vcl_hit` or `vcl_miss` subroutines are invoked based on the cache result.

In addition, the platform reports the cache state in `fastly_info.state`, and sets the `X-Cache` response header before the `vcl_deliver` subroutine is invoked.

Use the following variables to learn about an object's cache state:

* `obj.ttl`
* `obj.hits`
* `obj.stale_while_revalidate`
* `obj.stale_if_error`

</Panel>
<Panel id="rust">

In a Compute service written in Rust, check the `X-Cache` HTTP header of the response.

> **NOTE:**
> * For request collapsing, one response will show as a miss, and the others (those that waited for that response) will show as hits.

Use the following to learn about an object's cache state:

* `Response::get_ttl(&self) -> Option<Duration>`
* `Response::get_age(&self) -> Option<Duration>`
* `Response::get_stale_while_revalidate(&self) -> Option<Duration>`

> **NOTE:**
> * The value returned by `get_ttl()` will reflect the _current_ TTL value from the viewpoint of the cache.
> * If the Response does not come from a cached entry, these methods return `None`. This can happen in the following cases:
>    * The Response comes from a request that bypasses the cache (i.e., with `Request::set_pass()`).
>    * The Response is synthetic (i.e., not returned from `Request::send()`).

</Panel>
<Panel id="javascript">

In a Compute service written in JavaScript, read the following properties to learn about an object's cache state:

* `resp.cached -> boolean | undefined` - Whether the `Response` resulted from a cache hit

> **NOTE:**
> * Alternatively, read the `X-Cache` HTTP response header: `HIT` if the `Response` resulted from a cache hit, or `MISS` otherwise.
> * For request collapsing, one response will show as a miss, and the others (those that waited for that response) will show as hits.

* `resp.stale -> boolean | undefined` - Whether the cached `Response` is considered stale
* `resp.ttl -> number | undefined` - Time to Live (TTL) in the cache for this response in seconds, reflecting the _current_ TTL value from the viewpoint of the cache
* `resp.age -> number | undefined` - The current age of the response in seconds
* `resp.swr -> number | undefined` - Time in seconds for which the response can safely be used despite being considered stale
* `resp.vary -> string[] | undefined` - The set of request headers for which the response may vary
* `resp.surrogateKeys -> string[] | undefined` - The surrogate keys for the cached response

> **NOTE:**
> * If the Response does not come from a cached entry, these properties return `undefined`. This can happen in the following cases:
>    * The Response comes from a request that bypasses the cache (i.e., with `new CacheOverride("pass")`)
>    * The Response is synthetic (i.e., not returned from `fetch()`)
> * These properties return `undefined` on hosts that do not support customized caching, such as the [local development server](/guides/compute/developer-guides/testing/#running-a-local-testing-server).

</Panel>
<Panel id="go">

In a Compute service written in Go, use the following to learn about an object's cache state:

* `func (resp *Response) FromCache() bool` - returns whether the response was returned from the cache (true) or fresh from the backend (false).

> **NOTE:**
> * Alternatively, read the `X-Cache` HTTP response header: `HIT` if the `Response` resulted from a cache hit, or `MISS` otherwise.
> * For request collapsing, one response will show as a miss, and the others (those that waited for that response) will show as hits.

Use the following to learn about an object's cache state:

* `func (resp *Response) TTL() (uint32, bool)`
* `func (resp *Response) Age() (uint32, bool)`
* `func (resp *Response) StaleWhileRevalidate() (uint32, bool)`

> **NOTE:**
> * The value returned by `TTL()` will reflect the _current_ TTL value from the viewpoint of the cache.
> * These methods return a value and a boolean indicating whether the Response was served from cache. If not, the boolean will be `false` and the value is undefined. This can happen in the following cases:
>    * The Response comes from a request that bypasses the cache (i.e., `Pass` on `CacheOptions` was set to `true`).
>    * The Response is synthetic (i.e., not returned from the request's `Send()` method).

</Panel>
 -->

For more information on how the readthrough cache determines cache lifetime, see [HTTP caching semantics](/guides/concepts/cache/cache-freshness).

### Customizing cache interaction with the backend

<div id="http-cache"></div>

In a Compute service, the readthrough cache interface can be further customized in the way it interacts with the backend, supporting use cases such as:

* making modifications to a request only when it is forwarded to a backend
* reading and modifying response status code and headers, and adjusting cache controls
* transforming the body of the response that is stored into the cache

> **WARNING:** This feature is not compatible with hosts that do not support customized caching, such as the [local development server](/guides/compute/developer-guides/testing/#features).

<!-- TabbedPanels component: 
<Panel id="rust">

To help you get started with customizing cache behavior, the following starter kit includes working example code and can be used as a starting point for using these features in your application.

* [Advanced caching starter kit for Rust](/solutions/starters/compute-starter-kit-rust-advanced-caching/)

</Panel>
<Panel id="javascript">

<Partial name="http-cache-js-flag" />

To help you get started with customizing cache behavior, the following starter kit includes working example code and can be used as a starting point for using these features in your application.

* [Advanced caching starter kit for JavaScript](/solutions/starters/compute-starter-kit-javascript-advanced-caching/)

</Panel>
<Panel id="go">

<Partial name="http-cache-go-flag" />

To help you get started with customizing cache behavior, the following starter kit includes working example code and can be used as a starting point for using these features in your application.

* [Advanced caching starter kit for Go](/solutions/starters/compute-starter-kit-go-advanced-caching/)

</Panel>
 -->

#### Modifying a request as it is forwarded to a backend

<div id="before-send-callback"></div>

Set a before-send callback to modify a request only when it is about to be forwarded to the backend (i.e., when it is _not_ fulfilled from the cache).

When a send operation is made on the request, the readthrough cache interface invokes the before-send callback if it would forward the request to the backend, after cache lookup has occurred but before forwarding it. If a before-send callback is not set, the default behavior is to make no change to the request before forwarding it.

> **For VCL developers:** The before-send callback is roughly equivalent to a combination of the `vcl_miss` and `vcl_pass` subroutines when the latter is invoked for a miss. It is invoked for both cache misses and revalidations, though not for direct-pass requests.

<!-- TabbedPanels component: 
<Panel id="rust">

To set a **before-send** callback on a request, use the `Request::set_before_send()` method before calling `Request::send()`.

```rust compile_fail
impl Request {
    pub fn set_before_send(
        &mut self,
        before_send: impl Fn(&mut Request) -> Result<(), SendError>
    );
}
```

Example:
```rust compile_fail
client_req.set_before_send(|req| {
  // before-send callback
  Ok(())
});
let backend_resp = client_req.send("example_backend")?;
```

</Panel>
<Panel id="javascript">

To set a **before-send** callback on a request, when calling `fetch()`, set the value of the `cacheOverride` property on the `options` parameter to an instance of `CacheOverride` constructed with the `beforeSend` property of its constructor parameter set to a function.

```typescript
interface CacheOverrideOptions {
  beforeSend(req: Request): void | Promise<void>;
}
```

Example:
```javascript
const backendResp = await fetch(clientReq, {
  backend: 'example_backend',
  cacheOverride: new CacheOverride({
    async beforeSend(req) {
      // before-send callback
    },
  }),
});
```

</Panel>
<Panel id="go">

To set a **before-send** callback on a request, assign a function to the `BeforeSend` field of a `CacheOptions` struct before calling the `Send` method on the `Request` instance.

```go
type CacheOptions struct {
  // ...other members...
  BeforeSend func(*Request) error
}
```

Example:
```go
r.CacheOptions.BeforeSend = func(r *fsthttp.Request) error {
  // before-send callback
  return nil
}
resp, err := r.Send(ctx, "example_backend")
```

</Panel>
 -->

For example, consider the case where the backend requires an additional authorization header whose value is expensive to produce. A before-send callback can be used to add this header only when the request must be forwarded to the backend.

The following example demonstrates the use of a before-send callback to inject an authorization header that is not part of the originating request headers:

<!-- TabbedPanels component: 
<Panel id="rust" full>

```rust compile_fail
let mut client_req = Request::from_client();
client_req.set_before_send(|req| {
    req.set_header(http::header::AUTHORIZATION, prepare_auth_header());
    Ok(())
});
let backend_resp = client_req.send("example_backend")?;
```

</Panel>
<Panel id="javascript" full>

```javascript
const backendResp = await fetch(clientReq, {
  backend: 'example_backend',
  cacheOverride: new CacheOverride({
    async beforeSend(req) {
      req.headers.set('Authorization', prepareAuthHeader());
    },
  }),
});
```

</Panel>
<Panel id="go" full>

```go
r.CacheOptions.BeforeSend = func(r *fsthttp.Request) error {
  r.Header.Set("Authorization", prepareAuthHeader())
  return nil
}
resp, err := r.Send(ctx, "example_backend")
```

</Panel>
 -->

#### Controlling cache behavior based on backend response

<div id="after-send-callback"></div>
<div id="candidate-response"></div>
<div id="the-candidate-response-object"></div>

Set an after-send callback when you need to modify the status code or headers, adjust cache controls, or transform the body of the response before it is potentially stored into the cache and returned to the client.

When a send operation is made on the request, if the readthrough cache interface forwarded it to the backend (i.e., didn't fulfill it from the cache), then it invokes the after-send callback after a response has been received from the backend, before it is potentially stored into the cache. If an after-send callback is not set, the default behavior is to make no change to the response before storing it into the cache.

When the readthrough cache interface invokes the after-send callback, it passes in a "candidate" response object, representing a form of the response that the readthrough cache would store. By interacting with this object you can adjust cache controls and/or transform the body.

> **For VCL developers:** The after-send callback is roughly equivalent to a `vcl_fetch` subroutine, but unlike with VCL, it is invoked even for successful revalidations.

<!-- TabbedPanels component: 
<Panel id="rust">

To set an **after-send** callback on a request, use the `Request::set_after_send()` method before calling `Request::send()`.

```rust compile_fail
impl Request {
    pub fn set_after_send(
        &mut self,
        after_send: impl Fn(&mut CandidateResponse) -> Result<(), SendError>,
    );
}
```

Example:
```rust compile_fail
client_req.set_after_send(|resp| {
  // after-send callback
  Ok(())
});
let backend_resp = client_req.send("example_backend")?;
```

</Panel>
<Panel id="javascript">

To set an **after-send** callback on a request, when calling `fetch()`, set the value of the `cacheOverride` property on the `options` parameter to an instance of `CacheOverride` constructed with the `afterSend` property of its constructor parameter set to a function.

```typescript
interface CacheOverrideOptions {
  afterSend(resp: Response): void | CacheOptions | Promise<void | CacheOptions>;
}
```

Example:
```javascript
const backendResp = await fetch(clientReq, {
  backend: 'example_backend',
  cacheOverride: new CacheOverride({
    async afterSend(resp) {
      // after-send callback
    },
  }),
});
```

</Panel>
<Panel id="go">

To set an **after-send** callback on a request, assign a function to the `AfterSend` field of a `CacheOptions` struct before calling the `Send` method on the `Request` instance.

```go
type CacheOptions struct {
  // ...other members...
  AfterSend func(*CandidateResponse) error
}
```

Example:
```go
r.CacheOptions.AfterSend = func(cr *fsthttp.CandidateResponse) error {
  // after-send callback
  return nil
}
resp, err := r.Send(ctx, "example_backend")
```

</Panel>
 -->

##### **Working with the status code and headers**

<!-- TabbedPanels component: 
<Panel id="rust">

The `CandidateResponse` passed into the after-send callback supports an identical interface to `Response` for reading and writing the status code and headers.

* Reading
   * `CandidateResponse::get_status(&self) -> StatusCode`
   * `CandidateResponse::get_header(&self, name: impl ToHeaderName) -> Option<&HeaderValue>`

* Writing
   * `CandidateResponse::set_status(&mut self, status: impl ToStatusCode)`
   * `CandidateResponse::set_header(&mut self, name: impl ToHeaderName, value: impl ToHeaderValue)`

Changes you make to the `CandidateResponse` during the after-send callback are reflected in the response that will be stored in the cache and returned to the client.

> **NOTE:** `CandidateResponse` does not expose the response body for reading or modification. For details, see [Modifying the body that is saved to the cache](#modifying-the-body-that-is-saved-to-the-cache).
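
The following sketch (in the style of the examples above; the status rewrite and header name are illustrative) adjusts the status code and headers in an after-send callback:

```rust compile_fail
client_req.set_after_send(|resp| {
    // Rewrite an unusual success status before the response is cached.
    if resp.get_status() == StatusCode::NON_AUTHORITATIVE_INFORMATION {
        resp.set_status(StatusCode::OK);
    }
    // This header is stored in the cache and returned to clients.
    resp.set_header("x-processed-at-edge", "1");
    Ok(())
});
```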

</Panel>
<Panel id="javascript">

The `Response` object passed into the after-send callback can be used to read and write the status code and headers.

Changes you make to the status and headers of the `Response` during the after-send callback are reflected in the response that will be stored in the cache and returned to the client.

> **NOTE:** Unlike in other contexts:
> * `response.status` is not `readonly`. This allows you to modify the status code of the response that is stored.
> * `response.body` cannot be read, and methods such as `text()` and `json()` that consume the `body` cannot be used. For details, see [Modifying the body that is saved to the cache](#modifying-the-body-that-is-saved-to-the-cache).
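
The following sketch (the status rewrite and header name are illustrative) adjusts the status code and headers in an after-send callback:

```javascript
const backendResp = await fetch(clientReq, {
  backend: 'example_backend',
  cacheOverride: new CacheOverride({
    afterSend(resp) {
      // Rewrite an unusual success status before the response is cached.
      if (resp.status === 203) {
        resp.status = 200;
      }
      // This header is stored in the cache and returned to clients.
      resp.headers.set('x-processed-at-edge', '1');
    },
  }),
});
```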

</Panel>
<Panel id="go">

The `CandidateResponse` passed into the after-send callback supports an identical interface to `Response` for reading and writing the status code and headers.

* Reading
   * `func (cr *CandidateResponse) Status() (int, error)`
   * `func (cr *CandidateResponse) Header(key string) (string, error)`

* Writing
   * `func (cr *CandidateResponse) SetStatus(status int) error`
   * `func (cr *CandidateResponse) SetHeader(key string, value string) error`
   * `func (cr *CandidateResponse) DelHeader(key string) error`

Any changes made to the `CandidateResponse` during the after-send callback will be reflected in the final response that is cached and returned to the client.

> **NOTE:** `CandidateResponse` does not expose the response body for reading or modification. For details, see [Modifying the body that is saved to the cache](#modifying-the-body-that-is-saved-to-the-cache).
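
The following sketch (the status rewrite and header name are illustrative) adjusts the status code and headers in an after-send callback:

```go
r.CacheOptions.AfterSend = func(cr *fsthttp.CandidateResponse) error {
  // Rewrite an unusual success status before the response is cached.
  if status, err := cr.Status(); err == nil && status == 203 {
    if err := cr.SetStatus(200); err != nil {
      return err
    }
  }
  // This header is stored in the cache and returned to clients.
  return cr.SetHeader("X-Processed-At-Edge", "1")
}
resp, err := r.Send(ctx, "example_backend")
```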

</Panel>
 -->

##### **Manipulating cache controls**

<!-- TabbedPanels component: 
<Panel id="rust">

`CandidateResponse` also supports checking and adjusting cache controls on a backend response before it is inserted into the cache.

* The readthrough cache interprets [HTTP caching headers](/guides/concepts/cache/cache-freshness) on the backend response to determine whether the response will be cached, and makes the result available by calling:

   * `CandidateResponse::is_cacheable(&self) -> bool`

   > **IMPORTANT:**
   > This value is set at the start of the after-send callback, and the decision of whether the response will be cached does not change based on updating cache controls after this point. To override the decision of whether this response will be cached, see the item on `set_cacheable` and `set_uncacheable`, below.

* The cache controls interpreted from the HTTP caching headers are available by calling the following methods:

   * `CandidateResponse::get_ttl(&self) -> Duration`
   * `CandidateResponse::get_stale_while_revalidate(&self) -> Duration`
   * `CandidateResponse::get_vary(&self) -> impl Iterator<Item = &str>`
   * `CandidateResponse::get_surrogate_keys(&self) -> Vec<String>`

* Cache controls on the response that is potentially stored to the cache and returned to the caller can be modified either by calling the following methods or by directly modifying the HTTP caching headers (using `CandidateResponse::set_header()`).

   * `CandidateResponse::set_ttl(&mut self, override: Duration)`
   * `CandidateResponse::set_stale_while_revalidate(&mut self, override: Duration)`
   * `CandidateResponse::set_vary(&mut self, override: impl IntoIterator<Item = &'a HeaderName>)`
   * `CandidateResponse::set_surrogate_keys(&mut self, override: impl IntoIterator<Item = &str>)`

> **IMPORTANT:**
> * The `set_*` methods _do not modify the headers_ in any way, but always _override_ their values.
> * The `get_*` methods always return the _effective value_ (i.e., the override if one is set, or the value based on the current headers otherwise).

* To override the decision of whether this response will be cached, call one of the following methods:

   * `CandidateResponse::set_cacheable(&mut self)` - Override the cache behavior and store the object to the cache.
   * `CandidateResponse::set_uncacheable(&mut self, record_uncacheable: bool)` - Override the cache behavior and do not store the object to the cache. If `record_uncacheable` is `true`, store a [hit-for-pass](/guides/concepts/cache/request-collapsing/#hit-for-pass) object (a marker to disable request collapsing until a cacheable response is returned).

The following example demonstrates the use of an after-send callback to customize caching behavior based on the `Content-Type` returned by the backend:

```rust compile_fail
let mut client_req = Request::from_client();
client_req.set_after_send(|resp| {
    match resp.get_header_str("Content-Type").as_deref() {
        Some("image") => resp.set_ttl(Duration::from_secs(67)),
        Some("text/html") => resp.set_ttl(Duration::from_secs(321)),
        Some("application/json") => resp.set_uncacheable(false),
        _ => resp.set_ttl(Duration::from_secs(2)),
    }
    Ok(())
});
let backend_resp = client_req.send("example_backend")?;
```

The following example demonstrates the use of an after-send callback to store a hit-for-pass object (a marker to disable request collapsing until a cacheable response is returned). This is done by passing `true` for the `record_uncacheable` parameter of `CandidateResponse::set_uncacheable()`:

```rust compile_fail
let mut client_req = Request::from_client();
client_req.set_after_send(|resp| {
    if resp.contains_header("my-private-header") {
        resp.set_uncacheable(true);
    }
    Ok(())
});

let backend_resp = client_req.send("example_backend")?;
```

</Panel>
<Panel id="javascript">

In the after-send callback, `Response` can also be used for checking and adjusting cache controls on a backend response before it is inserted into the cache.

* The cache controls interpreted from the HTTP caching headers are available by reading the following properties. Cache controls on the response that is potentially stored to the cache and returned to the caller can be modified either by setting the properties to new values or by directly modifying the HTTP caching headers (using `resp.headers.set()`).

   > **IMPORTANT:**
   > * Setting the properties on the object _does not modify the headers_ in any way, but always _overrides_ their values.
   > * Reading the properties always returns the _effective value_ (i.e., the override if one is set, or the value based on the current headers otherwise).

   * `resp.ttl: number | undefined`
   * `resp.swr: number | undefined`
   * `resp.vary: string[] | undefined`
   * `resp.surrogateKeys: string[] | undefined`

* To override the decision of whether this response will be cached, return a `CacheOptions` object from the after-send callback, and set the `cache` value to one of the following:

   * `undefined` (or not provided) - Use the default cache behavior (`resp.isCacheable`).
   * `true` (boolean) - Override the cache behavior and store the object to the cache.
   * `false` (boolean) - Override the cache behavior and do not store the object to the cache.
   * `'uncacheable'` (string) - Override the cache behavior and do not store the object to the cache. Additionally, store a [hit-for-pass](/guides/concepts/cache/request-collapsing/#hit-for-pass) object (a marker to disable request collapsing until a cacheable response is returned).

The following example demonstrates the use of an after-send callback to customize caching behavior based on the `Content-Type` returned by the backend:

```javascript
const backendResp = await fetch(clientReq, {
  backend: 'example_backend',
  cacheOverride: new CacheOverride({
    afterSend(resp) {
      let cache = undefined;
      switch (resp.headers.get('content-type')) {
      case 'image':
        resp.ttl = 67;
        break;
      case 'text/html':
        resp.ttl = 321;
        break;
      case 'application/json':
        cache = false;
        break;
      default:
        resp.ttl = 2;
      }
      return { cache };
    },
  }),
});
```

The following example demonstrates the use of an after-send callback to store a hit-for-pass object (a marker to disable request collapsing until a cacheable response is returned). This is done by returning an object from the after-send callback whose `cache` value is set to `'uncacheable'`:

```javascript
const backendResp = await fetch(clientReq, {
  backend: 'example_backend',
  cacheOverride: new CacheOverride({
    afterSend(resp) {
      let cache = undefined;
      if (resp.headers.has('my-private-header')) {
        cache = 'uncacheable';
      }
      return { cache };
    },
  }),
});
```

</Panel>
<Panel id="go">

The `CandidateResponse` passed to the after-send callback also supports inspecting and overriding cache behavior for a backend response before it is inserted into the cache.

* The following methods return the effective caching behavior, as interpreted from HTTP headers or overridden values:

   * `func (cr *CandidateResponse) TTL() (uint32, error)`
   * `func (cr *CandidateResponse) StaleWhileRevalidate() (uint32, error)`
   * `func (cr *CandidateResponse) Vary() (string, error)`
   * `func (cr *CandidateResponse) SurrogateKeys() (string, error)`

* Cache behavior for the response (as stored in the cache and returned to the client) can be modified using these methods, or by setting headers directly via `SetHeader()`:

   * `func (cr *CandidateResponse) SetTTL(ttl uint32)`
   * `func (cr *CandidateResponse) SetStaleWhileRevalidate(swr uint32)`
   * `func (cr *CandidateResponse) SetVary(vary string)`
   * `func (cr *CandidateResponse) SetSurrogateKeys(keys string)`

> **IMPORTANT:**
> * The `Set*` methods _do not modify the headers_ in any way, but always _override_ their values.
> * The getter methods (`TTL()`, `StaleWhileRevalidate()`, etc.) always return the _effective value_ (i.e., the override if one is set, or the value based on the current headers otherwise).

* To explicitly control whether the response is stored in the cache, use the following methods:

   * `func (cr *CandidateResponse) SetCacheable() error` - Forces the response to be cached, regardless of HTTP headers.

   * `func (cr *CandidateResponse) SetUncacheable() error` - Prevents the response from being cached. This does **not** record a hit-for-pass marker, so request collapsing remains enabled.

   * `func (cr *CandidateResponse) SetUncacheableDisableCollapsing() error` - Prevents the response from being cached, and also stores a [hit-for-pass](/guides/concepts/cache/request-collapsing/#hit-for-pass) marker. This disables request collapsing for subsequent requests until a cacheable response is returned.

The following example demonstrates the use of an after-send callback to customize caching behavior based on the `Content-Type` returned by the backend:

```go
r.CacheOptions.AfterSend = func(cr *fsthttp.CandidateResponse) error {
  contentType, _ := cr.Header("Content-Type")

  switch {
  case contentType == "image":
    cr.SetTTL(67)
  case contentType == "text/html":
    cr.SetTTL(321)
  case contentType == "application/json":
    cr.SetUncacheable()
  default:
    cr.SetTTL(2)
  }

  return nil
}

resp, err := r.Send(ctx, "example_backend")
```

The following example demonstrates the use of an after-send callback to store a hit-for-pass object (a marker to disable request collapsing until a cacheable response is returned). This is done by calling `SetUncacheableDisableCollapsing()` on the `CandidateResponse` object:

```go
r.CacheOptions.AfterSend = func(cr *fsthttp.CandidateResponse) error {
  if _, err := cr.Header("my-private-header"); err == nil {
    cr.SetUncacheableDisableCollapsing()
  }
  return nil
}

resp, err := r.Send(ctx, "example_backend")
```

</Panel>
 -->

##### **Modifying the body that is saved to the cache**

Set a body-transform to modify the body of the response before it is stored into the cache.

When the cache interface receives the response body from the backend, it invokes the body-transform with the backend response body, and the body-transform is responsible for writing out the new body. This transformed body is stored into the cache. If a body-transform is not set, the default behavior is to make no changes to the response body before storing it into the cache.

The readthrough cache interface does not invoke your body-transform during a successful revalidation (`304 Not Modified`) response from the backend. This is because the revalidation response from the backend does not contain a body. See [additional considerations](#additional-considerations) for details.

<!-- TabbedPanels component: 
<Panel id="rust">

In Rust, the **body-transform** is set as a callback function. To set one, use the `CandidateResponse::set_body_transform()` method in an after-send callback.

```rust compile_fail
impl CandidateResponse {
    fn set_body_transform(
        &mut self,
        transform: impl FnOnce(Body, &mut StreamingBody) -> Result<(), SendError>
    );
}
```

When the cache interface receives the response body from the backend, it invokes the callback, passing in the `Body` that contains the response received from the backend and a `StreamingBody` for your callback to use to write out the transformed body.

> **TIP:** Reading the entire body into memory can exhaust the WebAssembly memory space. Therefore, it is best practice to stream bytes out of the `Body` and into the `StreamingBody` when possible.

If you return an error from the callback, the send operation fails without writing into the cache, and the error is used as the return value of the `Request::send()` call.

The following example demonstrates the use of a body-transform callback to transform a response before it is cached. In this example, suppose a function `json_to_html()` exists that builds a templated HTML response from a backend’s JSON API response. The body-transform callback performs the templating when the response is received from the backend, enabling the readthrough cache interface to cache the templated HTML rather than regenerating it for each request:

```rust compile_fail
let mut client_req = Request::from_client();
client_req.set_after_send(|resp| {
    resp.set_content_type(Mime::TEXT_HTML);
    resp.set_body_transform(|body_in, body_out| {
        let json = body_in.into_json()?;
        body_out.append(json_to_html(json));
        Ok(())
    });

    Ok(())
});

// The resulting `resp` will have an HTML body, just like the cached object,
// even though the backend returned a JSON response:
let resp = client_req.send("example_backend")?;
```

</Panel>
<Panel id="javascript">

In JavaScript, the **body-transform** is set as a callback function. To set one, specify the `bodyTransformFn` value on the `CacheOptions` object that you optionally return from the after-send callback.

When the cache interface receives the response body from the backend, the original backend body is passed in to the transform function, and the function is expected to return the new body.

If you throw an exception from the body-transform, the send operation fails without writing into the cache, and the error is used to reject the promise returned by the `fetch()` call.

The following example demonstrates the use of a body-transform to transform a response before it is cached. In this example, suppose a function `jsonToHtml()` exists that builds a templated HTML response from a backend’s JSON API response. The body-transform performs the templating when the response is received from the backend, enabling the readthrough cache interface to cache the templated HTML rather than regenerating it for each request:

```javascript
const backendResp = await fetch(clientReq, {
  backend: 'example_backend',
  cacheOverride: new CacheOverride({
    afterSend(resp) {
      resp.headers.set('Content-Type', 'text/html');
      return {
        bodyTransformFn(bytes) {
          const str = new TextDecoder().decode(bytes);
          // jsonToHtml applies a template to generate HTML from JSON
          const html = jsonToHtml(str);
          return new TextEncoder().encode(html);
        },
      };
    },
  }),
});
```

</Panel>
<Panel id="go">

In Go, the **body transform** is set using a callback function. To set one, call the `SetBodyTransform()` method on `CandidateResponse` from within the after-send callback.

```go
func (cr *CandidateResponse) SetBodyTransform(
  fn func(io.ReadCloser) io.ReadCloser
)
```

When the cache interface receives the response body from the backend, it invokes the callback, passing in an `io.ReadCloser` that contains the backend response. The transform function is expected to return a new `io.ReadCloser` for the transformed body.

> **TIP:** Reading the entire body into memory can exhaust the WebAssembly memory space. Therefore, it is best practice to stream bytes from the body into the result whenever possible.

If the returned `io.ReadCloser` produces an error during reading, the send operation fails, and the response is not cached. The error is returned from the `r.Send()` call.

The following example demonstrates how to use a body transform callback to modify a backend response before it is cached. In this example, a helper function `jsonToHTML()` is used to convert a JSON response into HTML. The body transform performs the conversion when the response is received, allowing the transformed HTML to be cached instead of regenerating it on each request:

```go
r.CacheOptions.AfterSend = func(cr *fsthttp.CandidateResponse) error {
  cr.SetHeader("Content-Type", "text/html")

  cr.SetBodyTransform(func(body io.ReadCloser) io.ReadCloser {
    defer body.Close()

    // Read all input (in real use, prefer streaming!)
    inputBytes, err := io.ReadAll(body)
    if err != nil {
      return errorReader{err}
    }

    html := jsonToHTML(inputBytes)
    return io.NopCloser(strings.NewReader(html))
  })

  return nil
}

// The resulting response will have an HTML body, just like the cached object,
// even though the backend returned a JSON response:
resp, err := r.Send(ctx, "example_backend")
```

To propagate an error, the body-transform callback needs to return an `io.ReadCloser` implementation that surfaces the error when read, such as:
```go
// errorReader is an io.ReadCloser that always returns a given error on Read().
// Useful to simulate transform failures or propagate upstream errors.
type errorReader struct {
  err error
}
func (e errorReader) Read([]byte) (int, error) {
  return 0, e.err
}
func (e errorReader) Close() error {
  return nil
}
```

</Panel>
 -->

### Additional considerations

* The readthrough cache interface invokes the [before-send](#modifying-a-request-as-it-is-forwarded-to-a-backend) and [after-send](#controlling-cache-behavior-based-on-backend-response) callbacks whenever a send operation causes it to forward a request to the backend. This includes cache misses as well as [revalidations](/guides/concepts/cache/stale#revalidation)&mdash;including background revalidations.

   > **IMPORTANT:** A background revalidation occurs when an object is found in the cache during its [`stale-while-revalidate` lifetime](/guides/concepts/cache/stale#stale-while-revalidate-eliminate-origin-latency). In this case:
   > * The send operation _returns to the caller immediately with a `Response` object that represents the stale content_. Therefore, changes that would be made to the response during an after-send callback will not be reflected in the response returned to the client during this execution.
   > * _At some later time_ before the end of the execution of the application instance, the readthrough cache performs the revalidation by forwarding the request to the backend and updating the cache object, whose process includes invoking before-send and after-send callbacks if they have been set. The exact timing of this is outside the control of your application and should not be relied upon.

   > **IMPORTANT:** [Requests marked to bypass the cache](#bypassing-the-readthrough-cache-for-a-request) skip the readthrough cache entirely, and as a result do not cause the callbacks to be invoked.

* As described above in [Automatic request transformations](#automatic-request-transformations), the readthrough cache interface automatically transforms the request and response in order to make cache usage simple and efficient.

   > **IMPORTANT:** If the client request is a [ranged request](#ranged-requests):
   > * The readthrough cache interface _removes the range headers_ before it forwards the request, which will be reflected in the `Request` passed into the before-send callback if one is set. This enables the readthrough cache interface to retrieve and cache the entire object from the backend, manage its cache lifetime as a single entity, as well as subsequently serve arbitrary ranges from it.
   > * Thus, the readthrough cache interface receives a response from the backend that includes the _entire body_, which will be reflected in the `CandidateResponse` passed into the after-send callback if one is set. The readthrough cache interface further transforms the response to return appropriate `Response` objects for both ranged- and non-ranged requests for this cache object.

   > **IMPORTANT:** If the client request is for a [cached object that is stale](/guides/concepts/cache/stale#revalidation), and the object has a **validator** (an `ETag` or `Last-Modified` header):
   > * The readthrough cache interface _adds conditional headers_ before it forwards the request, which will be reflected in the `Request` passed into the before-send callback if one is set. This enables the readthrough cache interface to automatically attempt to revalidate the response with the backend, rather than re-retrieve it.
   > * If the backend successfully revalidates the response (status code is `304 Not Modified`): The readthrough cache interface invokes the after-send callback if one is set, but _it does not invoke the body-transform_. The `CandidateResponse` represents a complete response, i.e., it has status code `200 OK`, and its headers and cache policy reflect updates specified by the backend response (for example, to extend an object's lifetime). These values are merged with any changes you may make during the after-send callback, and then used to update the cached response object "in-place", without changing the cached response body.
   > * If the backend does not revalidate the response (status is other than `304 Not Modified`): The readthrough cache interface treats the response _as it would any other non-conditional request_, invoking the after-send callback and [body-transform](#modifying-the-body-that-is-saved-to-the-cache) if set, then storing the resulting response to the cache.

   This design enables the readthrough cache to internally manage the complexities of ranges and revalidation, allowing the developer to provide a single code path without needing to think about ranges or revalidation at all.

## Simple cache

Often you may want to cache data directly from your Compute application in a simple, volatile key-value store, without needing any of the more complex mechanisms supported by the readthrough cache. For example, if you want to cache the state required to resume an authentication flow, or flags that have been set for A/B testing in a session, a straightforward get/set interface is ideal.

Simple cache operations have always-on request collapsing, so if two operations attempt to populate the same cache key at the same time, the setter callback will only be executed once. However, values are treated as opaque data with no headers or metadata. This means simple cache does not support staleness, revalidation, or variation.

For complete documentation on the simple cache interface, refer to the reference for the [Compute SDK](/reference/compute/sdks) of your choice.

<!-- TabbedPanels component: 
<Panel id="vcl">

In a VCL service, the simple cache interface is not available.

</Panel>
<Panel id="rust">

In a Compute service written in Rust, the simple cache is supported via the [`fastly:cache:simple`](https://docs.rs/fastly/latest/fastly/cache/simple/index.html) module.

```rust
use {
    fastly::{
        cache::simple::{get_or_set_with, CacheEntry},
        mime, Body, Error, Request, Response,
    },
    std::{thread, time::Duration},
};

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    let path = req.get_path().to_owned();
    let value = get_or_set_with(path.clone(), || {
        Ok(CacheEntry {
            value: expensive_render_operation(&path),
            ttl: Duration::from_secs(60),
        })
    })
    .unwrap()
    .expect("closure always returns `Ok`, so we have a value");

    Ok(Response::from_body(value).with_content_type(mime::TEXT_PLAIN_UTF_8))
}

fn expensive_render_operation(path: &str) -> Body {
    // expensive/slow function which constructs and returns the contents for a given path
    thread::sleep(Duration::from_secs(1));
    return path.into();
}
```

</Panel>
<Panel id="javascript">

In a Compute service written in JavaScript, the simple cache is supported via the `SimpleCache` object exported from the [`fastly:cache`](https://js-compute-reference-docs.edgecompute.app/docs/fastly:cache/SimpleCache/) module.

```js
/// <reference types="@fastly/js-compute" />

import { SimpleCache } from 'fastly:cache';
addEventListener('fetch', event => event.respondWith(app(event)));

async function app(event) {
  const path = new URL(event.request.url).pathname;
  const content = SimpleCache.getOrSet(path, async () => {
    return {
      value: await expensiveRenderOperation(path),
      ttl: 60
    }
  });
  return new Response(content, {
    headers: {
      'content-type': 'text/plain;charset=UTF-8'
    }
  });
}

async function expensiveRenderOperation(path) {
  // expensive/slow function which constructs and returns the contents for a given path
  await new Promise(resolve => setTimeout(resolve, 10000));
  return path;
}
```

</Panel>
<Panel id="go">

In a Compute service written in Go, the simple cache is supported via the [`cache/simple`](https://pkg.go.dev/github.com/fastly/compute-sdk-go/cache/simple) package.

```go
package main

import (
  "context"
  "io"
  "strings"
  "time"

  "github.com/fastly/compute-sdk-go/cache/simple"
  "github.com/fastly/compute-sdk-go/fsthttp"
)

func main() {
  fsthttp.ServeFunc(func(ctx context.Context, w fsthttp.ResponseWriter, r *fsthttp.Request) {

    rc, err := simple.GetOrSet([]byte(r.URL.Path), func() (simple.CacheEntry, error) {
      return simple.CacheEntry{
        Body: render(r.URL.Path),
        TTL:  time.Minute,
      }, nil
    })
    if err != nil {
      fsthttp.Error(w, err.Error(), fsthttp.StatusInternalServerError)
      return
    }
    defer rc.Close()

    w.Header().Set("Content-Type", "text/plain")
    io.Copy(w, rc)
  })
}

func render(path string) *strings.Reader {
  time.Sleep(10 * time.Second)
  return strings.NewReader(path)
}
```

</Panel>
 -->

## Core cache

Core cache is a low-level interface available in Compute services. It offers the primitive operations required to implement high-performance cache applications with all the same advanced features available from the readthrough cache, but gives you complete control of them.

Items cached via this interface consist of:

- **A cache key:** up to 4KiB of arbitrary bytes that identify a cached item. The cache key need not uniquely identify an item; headers can be used to augment the key when multiple items are associated with the same key. See [LookupBuilder::header()](https://docs.rs/fastly/latest/fastly/cache/core/struct.LookupBuilder.html#method.header) in the Rust SDK documentation for more details.
- **General metadata**, such as expiry data (item age, when to expire, and surrogate keys for purging).
- **User-controlled metadata:** arbitrary bytes stored alongside the cached item contents that can be updated when revalidating the cached item.
- **The object itself:** arbitrary bytes read via `Body` and written via `StreamingBody`.

For complete documentation on the core cache interface, refer to the reference for the [Compute SDK](/reference/compute/sdks) of your choice.

<!-- TabbedPanels component: 
<Panel id="vcl">

In a VCL service, the core cache interface is not available.

</Panel>
<Panel id="rust">

In a Compute service written in Rust, in the simplest cases, the top-level [`insert`](https://docs.rs/fastly/latest/fastly/cache/core/fn.insert.html) and [`lookup`](https://docs.rs/fastly/latest/fastly/cache/core/fn.lookup.html) functions are used for one-off operations on a cached item, and are appropriate when request collapsing and revalidation capabilities are not required.

The core cache also supports more complex uses via the concept of a "transaction", which can collapse concurrent lookups to the same item, including coordinating revalidation. The following example demonstrates a lookup/insert cache transaction:

```rust compile_fail
const TTL: Duration = Duration::from_secs(3600);
// perform the lookup
let lookup_tx = Transaction::lookup(CacheKey::from_static(b"my_key"))
    .execute()
    .unwrap();
if let Some(found) = lookup_tx.found() {
    // a cached item was found; we use it now even though it might be stale,
    // and we'll revalidate it below
    use_found_item(&found);
}
// now we need to handle the "must insert" and "must update" cases
if lookup_tx.must_insert() {
    // a cached item was not found, and we've been chosen to insert it
    let contents = build_contents();
    let (mut writer, found) = lookup_tx
        .insert(TTL)
        .surrogate_keys(["my_key"])
        .known_length(contents.len() as u64)
        // stream back the object so we can use it after inserting
        .execute_and_stream_back()
        .unwrap();
    writer.write_all(contents).unwrap();
    writer.finish().unwrap();
    // now we can use the item we just inserted
    use_found_item(&found);
} else if lookup_tx.must_insert_or_update() {
    // a cached item was found and used above, and now we need to perform
    // revalidation
    let revalidation_contents = build_contents();
    if let Some(stale_found) = lookup_tx.found() {
        if should_replace(&stale_found, &revalidation_contents) {
            // use `insert` to replace the previous object
            let mut writer = lookup_tx
                .insert(TTL)
                .surrogate_keys(["my_key"])
                .known_length(revalidation_contents.len() as u64)
                .execute()
                .unwrap();
            writer.write_all(revalidation_contents).unwrap();
            writer.finish().unwrap();
        } else {
            // otherwise update the stale object's metadata
            lookup_tx
                .update(TTL)
                .surrogate_keys(["my_key"])
                .execute()
                .unwrap();
        }
    }
}
```

</Panel>
<Panel id="javascript">

In a Compute service written in JavaScript, the core cache is supported via the `CoreCache` object exported from the [`fastly:cache`](https://js-compute-reference-docs.edgecompute.app/docs/fastly:cache/CoreCache/) module.

In the simplest cases, the [`CoreCache.insert`](https://js-compute-reference-docs.edgecompute.app/docs/fastly:cache/CoreCache/insert) and [`CoreCache.lookup`](https://js-compute-reference-docs.edgecompute.app/docs/fastly:cache/CoreCache/lookup) functions are used for one-off operations on a cached item, and are appropriate when request collapsing and revalidation capabilities are not required.

The core cache also supports more complex uses via the concept of a "transaction", which can collapse concurrent lookups to the same item, including coordinating revalidation. The following example demonstrates a lookup/insert cache transaction:

```javascript
import { CoreCache } from "fastly:cache";

addEventListener("fetch", event => event.respondWith(handleRequest(event)));

async function handleRequest(event) {
    const path = (new URL(event.request.url)).pathname;
    const entry = CoreCache.transactionLookup(path);
    if (entry.state().mustInsertOrUpdate()) {
        const [writer, reader] = entry.insertAndStreamBack({
            maxAge: 60 * 1000
        });
        writer.append(`hello from ${path}`);
        writer.close();
        return new Response(reader.body(), {
            headers: {
                "x-cache": "MISS",
                "x-cache-hits": 0,
            },
        });
    } else {
        return new Response(entry.body(), {
            headers: {
                "x-cache": "HIT",
                "x-cache-hits": entry.hits(),
            },
        });
    }
}
```

</Panel>
<Panel id="go">

In a Compute service written in Go, the core cache is supported via the [`cache/core`](https://pkg.go.dev/github.com/fastly/compute-sdk-go/cache/core) package. In the simplest cases, the [`core.Insert`](https://pkg.go.dev/github.com/fastly/compute-sdk-go/cache/core#Insert) and [`core.Lookup`](https://pkg.go.dev/github.com/fastly/compute-sdk-go/cache/core#Lookup) functions are used for one-off operations on a cached item, and are appropriate when request collapsing and revalidation capabilities are not required.
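
As a minimal sketch of these one-off operations (the key, TTL, and contents are purely illustrative), an item might be written and later read back like this:

```go
package main

import (
  "io"
  "time"

  "github.com/fastly/compute-sdk-go/cache/core"
)

func writeAndRead() ([]byte, error) {
  // One-off insert: write an object under a key with a one-hour TTL.
  w, err := core.Insert([]byte("example-key"), core.WriteOptions{TTL: time.Hour})
  if err != nil {
    return nil, err
  }
  if _, err := w.Write([]byte("hello world")); err != nil {
    return nil, err
  }
  if err := w.Close(); err != nil {
    return nil, err
  }

  // One-off lookup: core.ErrNotFound is returned if the item is not cached.
  f, err := core.Lookup([]byte("example-key"), core.LookupOptions{})
  if err != nil {
    return nil, err
  }
  defer f.Body.Close()
  return io.ReadAll(f.Body)
}
```

Because these calls bypass the transaction machinery, concurrent requests for the same key may each trigger their own insert; use the transactional API below when that matters.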

The core cache also supports more complex uses via the concept of a "transaction", which can collapse concurrent lookups to the same item, including coordinating revalidation. The following example demonstrates a lookup/insert cache transaction:

```go
import (
  "time"

  "github.com/fastly/compute-sdk-go/cache/core"
)

func ExampleTransaction() {
  // Users of the transactional API should at a minimum anticipate
  // lookups that are obligated to insert an object into the cache,
  // and lookups which are not. If the stale-while-revalidate
  // parameter is set for a cached object, the user should also
  // distinguish between the insertion and revalidation cases.

  useFoundItem := func(f *core.Found) {
    // Do something with the found item
  }

  buildContents := func() []byte {
    // Build the contents of the cached item
    return []byte("hello world!")
  }

  shouldReplace := func(f *core.Found, contents []byte) bool {
    // Determine whether the cached item should be replaced with
    // the new contents
    return true
  }

  tx, err := core.NewTransaction([]byte("my_key"), core.LookupOptions{})
  if err != nil {
    panic(err)
  }
  defer tx.Close()

  // f is a core.Found value, representing a found cache item.
  // core.ErrNotFound is returned if the item is not cached.
  f, err := tx.Found()
  switch err {
  case nil:
    // A cached item was found, though it might be stale.
    useFoundItem(f)

    // Perform revalidation, if necessary.
    if tx.MustInsertOrUpdate() {
      contents := buildContents()
      if shouldReplace(f, contents) {
        // Use Insert to replace the previous object
        w, err := tx.Insert(core.WriteOptions{
          TTL:           time.Hour,
          SurrogateKeys: []string{"my_key"},
          Length:        uint64(len(contents)),
        })
        if err != nil {
          panic(err)
        }

        if _, err := w.Write(contents); err != nil {
          panic(err)
        }

        if err := w.Close(); err != nil {
          panic(err)
        }
      } else {
        // Otherwise update the stale object's metadata
        if err := tx.Update(core.WriteOptions{
          TTL:           time.Hour,
          SurrogateKeys: []string{"my_key"},
        }); err != nil {
          panic(err)
        }
      }
    }

  case core.ErrNotFound:
    // The item was not found.
    if tx.MustInsert() {
      // We've been chosen to insert the object.
      contents := buildContents()
      w, f, err := tx.InsertAndStreamBack(core.WriteOptions{
        TTL:           time.Hour,
        SurrogateKeys: []string{"my_key"},
        Length:        uint64(len(contents)),
      })
      if err != nil {
        panic(err)
      }

      if _, err := w.Write(contents); err != nil {
        panic(err)
      }

      if err := w.Close(); err != nil {
        panic(err)
      }

      useFoundItem(f)
    } else {
      panic(err)
    }

  default:
    // An unexpected error
    panic(err)
  }
}
```

</Panel>
 -->

## Interoperability

Whether you use the readthrough cache, simple cache or core cache interfaces, data is stored in the same namespace, but interoperability is currently limited to the following:

- The core cache interface can read and overwrite objects inserted via the simple cache interface.
- The simple cache interface can read objects inserted via the core cache interface, but provides only the body of the object and cannot overwrite it.
- The readthrough cache interface is not interoperable with other cache interfaces and cannot read data written through another interface, nor can it write data that is visible to other cache interfaces.
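
For example, under the rules above, an object written through the core cache interface can be read back through the simple cache interface, body only. A minimal JavaScript sketch, assuming the same key string addresses the same object and with an illustrative key and TTL:

```javascript
import { CoreCache, SimpleCache } from "fastly:cache";

// Write via the core cache interface (`maxAge` is in milliseconds).
const writer = CoreCache.insert("shared-key", { maxAge: 60 * 1000 });
writer.append("hello world");
writer.close();

// Read the same object back via the simple cache interface.
// Only the body is available; core cache metadata is not exposed.
const entry = SimpleCache.get("shared-key");
if (entry !== null) {
  const body = await entry.text();
}
```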

Interoperability also affects [purging](/guides/concepts/cache/purging/).

## Limitations and constraints

The following limitations apply to all cache features:

- Variants (created explicitly in the core cache interface or when the readthrough cache processes the `Vary` HTTP response header) are subject to different limits depending on the platform.

  <!-- TabbedPanels component: 
  <Panel id="vcl">

  In VCL services, the number of variants is limited to 50 per cache object, regardless of the number of `Vary` rule permutations.

  </Panel>
  <Panel id="compute-services">

  In Compute services, the number of variants is not limited, but the number of distinct `Vary` rules is limited to 8 per cache object.

  </Panel>
   -->

- In the core cache interface, write operations target the primary storage node for that cache address only. If an existing object is overwritten, replicated copies may continue to be returned by subsequent reads until their TTL expires or the object is purged.
- Purges are asynchronous while writes are synchronous, so performing a purge immediately before a write can create a race condition in which the purge clears the primary instance of the cached data *after* the write has completed.

Use of Fastly services is also subject to general limitations and constraints which are platform specific: for more information, refer to separate limits for [VCL services](/reference/vcl/constraints-and-limitations/) and for [Compute services](/guides/compute/getting-started-with-compute/#limitations-and-constraints).
