---
title: HTTP caching semantics
summary: null
url: https://www.fastly.com/documentation/guides/concepts/cache/cache-freshness
---


One of the most common uses of the Fastly edge cache is to store HTTP resources, such as webpages, JavaScript, CSS, images, and video. The [HTTP Caching standard (RFC 9111)](https://httpwg.org/specs/rfc9111.html) describes how to store a response associated with a request and reuse the stored response for subsequent requests.

Fastly's [readthrough cache](/guides/concepts/cache#readthrough-cache) interface interprets and processes the instructions encoded into HTTP responses. For example, the most common (and best practice) means of controlling cache lifetime is by setting an appropriate `Cache-Control` header on a backend response.

This page describes how long HTTP resources are cached and how you can effectively control that caching behavior.

<Partial name="http-cache-api-availability" />

## Response processing

When a response is received from a backend, the readthrough cache interface parses relevant response headers to determine whether it can be cached, and for how long.

<!-- TabbedPanels component: 
<Panel id="cdn-services">

In a VCL service, response processing results can be inspected and overridden during the `vcl_fetch` subroutine, which is executed once the response has been parsed (_unless_ the request is a [revalidation](#stale-objects-and-revalidation)).

</Panel>
<Panel id="compute-services">

In a Compute service, response processing results can be inspected and overridden during the [after-send](/guides/concepts/cache#after-send-callback) callback, which is executed once the response has been parsed (_including_ when the request is a [revalidation](#stale-objects-and-revalidation)).

</Panel>
 -->

### Parsing cache controls

<div id="parsing-cache-semantics"></div>

HTTP responses are parsed for the following cache semantics:

|Property|Parsing logic|Default|
|--------|-------------|-------|
| **Is response cacheable?** | If the fetch is a result of an earlier [explicit pass](#overriding-cache-behavior-on-requests) on the request, then **no**; otherwise<br/>if the fetch is a result of a [hit-for-pass](/guides/concepts/cache/request-collapsing/#hit-for-pass), then **no**; otherwise<br/>if HTTP status is `200`, `203`, `300`, `301`, `302`, `404`, or `410`, then **yes**;<br /> otherwise **no** | N/A |
| **Cache TTL** | Response headers in order of preference:<br/>`Surrogate-Control: max-age={n}`, otherwise<br/>`Cache-Control: s-maxage={n}`, otherwise<br />`Cache-Control: max-age={n}`, otherwise<br />`Expires: {date}` | 2 min |
| **Stale-while-revalidate TTL** | Response headers in order of preference:<br/>`Surrogate-Control: stale-while-revalidate={n}`, otherwise<br/>`Cache-Control: stale-while-revalidate={n}` | 0 |
| **Stale-if-error TTL** | Response headers in order of preference:<br/>`Surrogate-Control: stale-if-error={n}`, otherwise<br/>`Cache-Control: stale-if-error={n}` | 0 |

For example, an `HTTP 200` (OK) response with no cache-freshness indicators in the response headers _is cacheable_ and will have a TTL of 2 minutes. A `500 Internal Server Error` response with `Cache-Control: max-age=300` is _not cacheable_ because of its HTTP status code, so the 5-minute TTL (300 seconds) indicated in the `Cache-Control` header is irrelevant.
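The order of preference in the table can be sketched as follows. This is an illustrative model in JavaScript, not Fastly's implementation: the `directive` and `cacheTtl` helpers are hypothetical, and header parsing is reduced to simple pattern matching over a plain object with lowercase header names.

```javascript
const DEFAULT_TTL = 120; // Fastly's default TTL: 2 minutes

// Extract a numeric directive such as "max-age=300" from a header value.
function directive(headerValue, name) {
  if (!headerValue) return null;
  const m = headerValue.match(new RegExp(`(?:^|[,\\s])${name}\\s*=\\s*(\\d+)`));
  return m ? Number(m[1]) : null;
}

// Determine the cache TTL, following the order of preference in the table above.
function cacheTtl(headers, now = Date.now()) {
  const surrogate = directive(headers["surrogate-control"], "max-age");
  if (surrogate !== null) return surrogate;
  const cc = headers["cache-control"];
  const sMaxage = directive(cc, "s-maxage");
  if (sMaxage !== null) return sMaxage;
  const maxAge = directive(cc, "max-age");
  if (maxAge !== null) return maxAge;
  if (headers["expires"]) {
    const delta = (Date.parse(headers["expires"]) - now) / 1000;
    return Math.max(0, Math.round(delta));
  }
  return DEFAULT_TTL;
}
```

For example, `cacheTtl({ "cache-control": "s-maxage=600, max-age=300" })` yields `600` because `s-maxage` outranks `max-age`, while `cacheTtl({})` falls back to the 2-minute default.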

<!-- TabbedPanels component: 
<Panel id="cdn-services">

In a VCL service, once the response has been parsed, the status code, headers received with the response, and cache controls resulting from parsing the response headers are available as VCL variables during `vcl_fetch`:

* `beresp.status`
* `beresp.http.{NAME}`
* `beresp.cacheable`
* `beresp.ttl`
* `beresp.stale_while_revalidate`
* `beresp.stale_if_error`

</Panel>
<Panel id="compute-services">

In a Compute service, once the response has been parsed, the status code, headers received with the response, and cache controls resulting from parsing the response headers are available during the [after-send](/guides/concepts/cache#after-send-callback) callback.

> **NOTE:** In a Compute service, `stale-if-error` is not supported.

<TabbedPanels syncGroup="languages">
<Panel id="rust">

The following methods are available on the `CandidateResponse` object passed into the after-send callback:

* `CandidateResponse::get_status(&self) -> StatusCode`
* `CandidateResponse::get_header(&self, name: impl ToHeaderName) -> Option<&HeaderValue>`
* `CandidateResponse::is_cacheable(&self) -> bool`
* `CandidateResponse::get_ttl(&self) -> Duration`
* `CandidateResponse::get_stale_while_revalidate(&self) -> Duration`

</Panel>
<Panel id="javascript">

The following properties can be read from the `Response` object passed into the after-send callback:

* `resp.status: number`
* `resp.headers.get(header: string): string | undefined`
* `resp.ttl: number | undefined`
* `resp.swr: number | undefined`

</Panel>
<Panel id="go">

The following methods are available on the `CandidateResponse` object passed into the after-send callback:

* `func (cr *CandidateResponse) Status() (int, error)`
* `func (cr *CandidateResponse) Header(key string) (string, error)`
* `func (cr *CandidateResponse) TTL() (uint32, error)`
* `func (cr *CandidateResponse) StaleWhileRevalidate() (uint32, error)`

</Panel>
 -->


#### Age

A backend can set the `Age` HTTP response header to indicate that an object has already spent some time in a cache [upstream](/reference/glossary#term-upstream) before being served to Fastly. If the response includes an `Age` header with a positive value, that value will be subtracted from the response's `max-age`, if it has one. If the resulting TTL is negative, it is considered to be zero. If the TTL of a response is derived from an `Expires` header, any `Age` header also present on the response will not affect the TTL calculation.

`Age` does not affect the initial values of the `stale-while-revalidate` or `stale-if-error` TTLs. If a response includes `Cache-Control: max-age=60, stale-while-revalidate=300` and `Age: 90` headers, then the object's TTL will be set to 0 (because the `Age` value exceeds the 60-second `max-age`) but the separate `stale-while-revalidate` TTL will still be 300 seconds.
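The arithmetic can be sketched as follows (an illustrative JavaScript model with a hypothetical helper name, not Fastly code):

```javascript
// Age is subtracted from max-age, clamped at zero; the stale-serving
// windows are initialized without reference to Age.
function initialTtls(maxAge, staleWhileRevalidate, age) {
  return {
    ttl: Math.max(0, maxAge - age),
    staleWhileRevalidate: staleWhileRevalidate,
  };
}

// The example above: Cache-Control: max-age=60, stale-while-revalidate=300
// arriving with Age: 90 — the TTL clamps to 0, but SWR remains 300.
initialTtls(60, 300, 90); // { ttl: 0, staleWhileRevalidate: 300 }
```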

<!-- TabbedPanels component: 
<Panel id="cdn-services">

In a VCL service, it's possible to change or remove the `Age` header on the response during the `vcl_fetch` subroutine. However, this will not affect the TTL that the object will receive in the cache, as the TTL will have already been calculated by that point.

If you need to modify the TTL, see [overriding semantics](#overriding-semantics) below.

</Panel>
<Panel id="compute-services">

In a Compute service, it's possible to change or remove the `Age` header on the response during the [after-send callback](/guides/concepts/cache#after-send-callback). However, this will not affect the TTL that the object will receive in the cache, as the TTL will have already been calculated by that point.

If you need to modify the TTL, see [overriding semantics](#overriding-semantics) below.

</Panel>
 -->

Fastly's readthrough cache interface also _sets_ the `Age` header each time it returns a response. Each response receives a new value for the `Age` header, equal to the amount of time that the object has spent in the Fastly cache, plus (if set) the value of the `Age` header on the cached object. This mechanism is used to ensure that objects cached in multiple tiers of the Fastly platform as a result of [shielding](/guides/concepts/shielding/) will not accrue more cache freshness than was originally intended.
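With shielding, this accumulation can be illustrated as follows (the `emittedAge` function and the numbers are hypothetical, for illustration only):

```javascript
// The Age emitted with a response: time the object has spent in this
// Fastly cache, plus the Age header value stored with the object.
function emittedAge(secondsInCache, storedAge = 0) {
  return secondsInCache + storedAge;
}

// An origin response reaches a shield POP carrying Age: 10, sits there
// for 30 seconds, and is then fetched and cached by an edge POP.
const shieldAge = emittedAge(30, 10); // the edge stores the object with Age: 40
const edgeAge = emittedAge(5, shieldAge); // a client 5 seconds later sees Age: 45
```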

<!-- TabbedPanels component: 
<Panel id="cdn-services">

In VCL services, the `Age` header is set in this way just before the response is delivered to the client.

</Panel>
<Panel id="compute-services">

In Compute services, the `Age` header is set in this way when the response is returned from the readthrough cache.

</Panel>
 -->

#### Surrogate control

The `Surrogate-Control: max-age` and `Cache-Control: s-maxage` header directives express a desired TTL for _server-based_ caches (such as Fastly's readthrough cache). Therefore, these will be given preference over `Cache-Control: max-age` when calculating the initial value of the response object's TTL.

Additionally, Fastly will remove any `Surrogate-Control` header before a response is sent to an end user. Fastly does not, however, remove the `s-maxage` directive from any `Cache-Control` header.

> **IMPORTANT:** If your service uses [shielding](/guides/concepts/shielding/), then the 'end user' making the request to the Fastly edge may be another Fastly [POP](/guides/concepts/pop). In this situation Fastly _does not_ strip the `Surrogate-Control` header, so that both [POPs](/guides/concepts/pop) will parse and respect the `Surrogate-Control` instructions.
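For example, a backend response carrying both headers (illustrative values):

```http
Cache-Control: max-age=60
Surrogate-Control: max-age=3600
```

Fastly caches the object for an hour (preferring `Surrogate-Control: max-age`) and strips the `Surrogate-Control` header before delivering to end users, so browsers see only `Cache-Control: max-age=60` and cache the object for a minute.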

### Overriding semantics

<!-- TabbedPanels component: 
<Panel id="cdn-services">

During the `vcl_fetch` subroutine, you can affect the caching behavior in a number of ways:

- **Modifying Fastly cache TTL**<br/>
  To change the amount of time the readthrough cache interface will cache an object, override the value of `beresp.ttl`, `beresp.stale_while_revalidate`, and `beresp.stale_if_error`:
  ```vcl
  set beresp.ttl = 300s;
  ```
  > **HINT:** This will entirely override the TTL that Fastly has determined by parsing the response's freshness semantics. If your service uses [shielding](/guides/concepts/shielding/), you may want to subtract `Age` manually. See the `beresp.ttl` docs for more information.

- **Modifying downstream (browser) cache TTL**<br/>
  To change the way that downstream caches (including browsers) treat the resource, override the value of the caching headers attached to the object. Take care if you use [shielding](/guides/concepts/shielding/) since you may also be changing the caching policy of a downstream Fastly cache:
  ```vcl
  if (req.backend.is_origin) {
    set beresp.http.Cache-Control = "max-age=86400"; # Rules for browsers
    set beresp.http.Surrogate-Control = "max-age=31536000"; # Rules for downstream Fastly caches
    unset beresp.http.Expires;
  }
  ```

The standard [VCL boilerplate](/guides/full-site-delivery/fastly-vcl/about-fastly-vcl) (which is also included in any Fastly VCL service that does not use custom VCL) applies some logic that affects freshness:

* If the response has a `Cache-Control: private` header, execute a `return(pass)`.
* If the response has a `Set-Cookie` header, execute a `return(pass)`.
* If the response does not have any of `Cache-Control: max-age`, `Cache-Control: s-maxage` or `Surrogate-Control: max-age` headers, set `beresp.ttl` to the [fallback TTL configured for your Fastly service](/guides/full-site-delivery/caching/controlling-caching/#set-a-fallback-ttl).

> **WARNING:** If you are using custom VCL, the fallback TTL configured via the [web interface](/guides/full-site-delivery/caching/controlling-caching/#set-a-fallback-ttl) or [API](/reference/api/vcl-services/settings/) will not be applied; instead, the fallback TTL is whatever is hard-coded into your VCL boilerplate. (You're free to remove any of the default interventions, including the fallback TTL logic, if you wish.)

</Panel>
<Panel id="compute-services">

During the after-send callback, you can affect the caching behavior in a number of ways:

<TabbedPanels syncGroup="languages">
<Panel id="rust">

- **Modifying Fastly cache TTL**<br/>
  To change the amount of time the readthrough cache interface will cache an object, call the following methods on `CandidateResponse`:
  * `CandidateResponse::set_ttl(&mut self, ttl: Duration)`
  * `CandidateResponse::set_stale_while_revalidate(&mut self, stale_while_revalidate: Duration)`

  For example:
  ```rust compile_fail
  req.set_after_send(|resp| {
    resp.set_ttl(Duration::from_secs(300));
    Ok(())
  });
  ```

  > **HINT:** This will entirely override the TTL that Fastly has determined by parsing the response's cache controls. If your service involves upstream caches, you may want to subtract `Age` manually. See [`Age`](#age) above for more information.

- **Modifying downstream (browser) cache TTL**<br/>
  To change the way that downstream caches (including browsers) treat the resource, override the value of the caching headers attached to the object. Take care if you use [shielding](/guides/concepts/shielding/) since you may also be changing the caching policy of a downstream Fastly cache:

  ```rust compile_fail
  req.set_after_send(|resp| {
    resp.set_header("Cache-Control", "max-age=86400"); // Rules for browsers
    resp.set_header("Surrogate-Control", "max-age=31536000"); // Rules for downstream Fastly caches
    resp.remove_header("Expires");
    Ok(())
  });
  ```

</Panel>
<Panel id="javascript">

- **Modifying Fastly cache TTL**<br/>
  To change the amount of time the readthrough cache interface will cache an object, write to the following properties on `Response`:
  * `resp.ttl: number`
  * `resp.swr: number`

  For example:
  ```javascript
  const backendResp = await fetch(clientReq, {
    backend: 'example_backend',
    cacheOverride: new CacheOverride({
      afterSend(resp) {
        resp.ttl = 300;
      },
    }),
  });
  ```

  > **HINT:** This will entirely override the TTL that Fastly has determined by parsing the response's cache controls. If your service involves upstream caches, you may want to subtract `Age` manually. See [`Age`](#age) above for more information.

- **Modifying downstream (browser) cache TTL**<br/>
  To change the way that downstream caches (including browsers) treat the resource, override the value of the caching headers attached to the object. Take care if you use [shielding](/guides/concepts/shielding/) since you may also be changing the caching policy of a downstream Fastly cache:

  ```javascript
  const backendResp = await fetch(clientReq, {
    backend: 'example_backend',
    cacheOverride: new CacheOverride({
      afterSend(resp) {
        resp.headers.set('Cache-Control', 'max-age=86400'); // Rules for browsers
        resp.headers.set('Surrogate-Control', 'max-age=31536000'); // Rules for downstream Fastly caches
        resp.headers.delete('expires');
      },
    }),
  });
  ```

</Panel>
<Panel id="go">

- **Modifying Fastly cache TTL**<br/>
  To change the amount of time the readthrough cache interface will cache an object, call the following methods on `CandidateResponse`:
  * `func (cr *CandidateResponse) SetTTL(ttl uint32)`
  * `func (cr *CandidateResponse) SetStaleWhileRevalidate(swr uint32)`

  For example:
  ```go
  r.CacheOptions.AfterSend = func(cr *fsthttp.CandidateResponse) error {
    cr.SetTTL(300)
    return nil
  }
  ```

  > **HINT:** This will entirely override the TTL that Fastly has determined by parsing the response's cache controls. If your service involves upstream caches, you may want to subtract `Age` manually. See [`Age`](#age) above for more information.

- **Modifying downstream (browser) cache TTL**<br/>
  To change the way that downstream caches (including browsers) treat the resource, override the value of the caching headers attached to the object. Take care if you use [shielding](/guides/concepts/shielding/) since you may also be changing the caching policy of a downstream Fastly cache:

  ```go
  r.CacheOptions.AfterSend = func(cr *fsthttp.CandidateResponse) error {
    cr.SetHeader("Cache-Control", "max-age=86400")  // Rules for browsers
    cr.SetHeader("Surrogate-Control", "max-age=31536000")  // Rules for downstream Fastly caches
    cr.DelHeader("Expires")
    return nil
  }
  ```

</Panel>
 -->


### Cache outcome

<!-- TabbedPanels component: 
<Panel id="cdn-services">

After parsing the response for freshness information and executing the `vcl_fetch` subroutine, the readthrough cache decides whether to save the object based on the following criteria, in this order of priority:

|  | Outcome                | Trigger | Result                 |
|--|------------------------|---------|------------------------|
| 1 | **Deliver stale** | `return(deliver_stale)` is executed in `vcl_fetch` (see [more about stale content](/guides/concepts/cache/stale) for details). | An existing, stale object is served from the cache.<br /><br />The downloaded response is discarded, regardless of its cacheability or proposed TTL. No changes are made to the cache. |
| 2 | **Deliver uncached** | The content is deemed _uncacheable_ or has a total TTL[^1] of zero.<br /><br />Fastly's cache deems a response uncacheable based on its HTTP status and other factors, following the [HTTP Caching RFC](https://datatracker.ietf.org/doc/html/rfc9111). The default behavior of the readthrough cache also excludes responses that include a `set-cookie` header.<br /><br />This behavior can be overridden using `beresp.cacheable`. | The new response is served to the end user, and no record is made in the cache. Requests queued up due to [request collapsing](/guides/concepts/cache/request-collapsing) are dequeued and forwarded individually to the backend. |
| 3 | **Cache and pass** | `return(pass)` is executed in `vcl_fetch`. | The new response is served to the end user, and an empty [hit-for-pass](/guides/concepts/cache/request-collapsing/#hit-for-pass) object is saved into the cache. This object exists to allow subsequent requests to proceed directly to a backend fetch without being queued by [request collapsing](/guides/concepts/cache/request-collapsing).<br /><br /> The hit-for-pass object is stored for the duration specified by its TTL, but subject to a minimum of 120 and a maximum of 3690 seconds. |
| 4 | **Cache and deliver** | All other cases (`return(deliver)` either explicitly or implicitly). | The new response is served to the end user, used to satisfy queued requests, and stored in cache for up to the duration specified by its TTL. |

</Panel>
<Panel id="compute-services">

After parsing the response for freshness information, the readthrough cache decides whether to save the object based on the following criteria, in this order of priority:

|  | Outcome                | Trigger | Result                 |
|--|------------------------|---------|------------------------|
| 1 | **Deliver uncached** | The content is deemed _uncacheable_ or has a total TTL[^2] of zero.<br /><br />Fastly's cache deems a response uncacheable based on its HTTP status and other factors, following the [HTTP Caching RFC](https://datatracker.ietf.org/doc/html/rfc9111). The default behavior of the readthrough cache also excludes responses that include a `set-cookie` header. | The new response is served to the client, and requests queued up due to [request collapsing](/guides/concepts/cache/request-collapsing) are dequeued and forwarded individually to the backend.<br /><br />The readthrough cache interface uses heuristics to decide whether to record a [hit-for-pass](/guides/concepts/cache/request-collapsing/#hit-for-pass) object in the cache. This object allows subsequent requests to proceed directly to a backend fetch without being queued, effectively disabling request collapsing until a cacheable response is received.<br /><br />A hit-for-pass object is created in many cases, but not, for example, for error statuses, so that the platform does not overload a failing backend. |
| 2 | **Cache and deliver** | All other cases | The new response is served to the client, used to satisfy queued requests, and stored in cache for up to the duration specified by its TTL. |

<TabbedPanels syncGroup="languages">
<Panel id="rust">

In a Compute application written in Rust, this behavior can be overridden during an [after-send](/guides/concepts/cache#after-send-callback) callback:

* `CandidateResponse::is_cacheable(&self) -> bool` - return a value indicating whether the response would be stored into the cache.
* `CandidateResponse::set_cacheable(&mut self)` - force the response to be stored in the cache, even if its headers or status would normally prevent that.
* `CandidateResponse::set_uncacheable(&mut self, record_uncacheable: bool)` - set the response _not_ to be stored in the cache.
    * If the `record_uncacheable` parameter is `true`, a hit-for-pass object is stored into the cache. Otherwise, no record is made in the cache.

</Panel>
<Panel id="javascript">

In a Compute application written in JavaScript, this behavior can be overridden by returning a `CacheOptions` object from the [after-send](/guides/concepts/cache#after-send-callback) callback, setting a value for `cache`:

* `{ cache: true }` - force the response to be stored in the cache, even if its headers or status would normally prevent that.
* `{ cache: false }` - set the response _not_ to be stored in the cache.
* `{ cache: 'uncacheable' }` - set the response _not_ to be stored in the cache, and instead store a hit-for-pass object into the cache.

</Panel>
<Panel id="go">

In a Compute application written in Go, this behavior can be overridden during an [after-send](/guides/concepts/cache#after-send-callback) callback:

* `func (cr *CandidateResponse) SetCacheable()` - force the response to be stored in the cache, even if its headers or status would normally prevent that.
* `func (cr *CandidateResponse) SetUncacheable()` - set the response _not_ to be stored in the cache.
* `func (cr *CandidateResponse) SetUncacheableDisableCollapsing()` - set the response _not_ to be stored in the cache, and instead store a hit-for-pass object into the cache.

</Panel>
 -->


[^1]: "Total TTL" is `beresp.ttl` + `beresp.stale_while_revalidate` + `beresp.stale_if_error`
[^2]: "Total TTL" is `resp.get_ttl()` + `resp.get_stale_while_revalidate()`

> **IMPORTANT:** Objects may not be stored for the full TTL requested, as they may get evicted earlier in favor of more popular objects, especially if they are large. Objects are not automatically evicted when they reach their TTL, they simply become [stale](/guides/concepts/cache/stale).

If you are experiencing a slow request rate or timeouts on uncacheable resources, it may be because they are forming queues that can be solved by creating a [hit-for-pass](/guides/concepts/cache/request-collapsing/#hit-for-pass). For more details, see [request collapsing](/guides/concepts/cache/request-collapsing).

## Stale objects and revalidation

An object that has reached its TTL becomes stale. If an object is requested while it is stale, it may trigger a revalidation request to the backend. Learn more about [staleness and revalidation](/guides/concepts/cache/stale).

## Preventing content from being cached

Since Fastly respects HTTP caching semantics in the readthrough cache, the best way to avoid caching content is to set the appropriate `Cache-Control` header on responses at the backend.

### Preventing caching at the edge and in browsers

Responding with the following header ensures that the object will not be cached by Fastly (the `private` directive), and that it will not be cached by any other downstream cache, such as a browser (both the `private` and `no-store` directives):

```http
Cache-Control: private, no-store
```

### Cache at the edge, not in browsers

You may want the content to be cached by Fastly but not by browsers. You can do this purely in the initial HTTP response header from the backend:

```http
Cache-Control: s-maxage=3600, max-age=0
```

<!-- TabbedPanels component: 
<Panel id="cdn-services">

In a VCL service, you can apply an override in `vcl_fetch`:

```vcl
set beresp.http.Cache-Control = "private, no-store"; # Don't cache in the browser
set beresp.ttl = 3600s; # Cache in Fastly
set beresp.ttl -= std.atoi(beresp.http.Age);
return(deliver);
```

</Panel>
<Panel id="compute-services">

In a Compute service, you can apply an override in the [after-send](/guides/concepts/cache#after-send-callback) callback:

<TabbedPanels syncGroup="languages">
<Panel id="rust" full>

```rust compile_fail
req.set_after_send(|resp| {
  resp.set_header("Cache-Control", "private, no-store"); // Don't cache in the browser
  resp.set_cacheable(); // Cache in Fastly
  resp.set_ttl(Duration::from_secs(3600) - resp.get_age());
  Ok(())
});
```

</Panel>
<Panel id="javascript" full>

```javascript
const backendResp = await fetch(clientReq, {
  backend: 'example_backend',
  cacheOverride: new CacheOverride({
    afterSend(resp) {
      resp.headers.set('Cache-Control', 'private, no-store'); // Don't cache in the browser
      resp.ttl = 3600 - resp.age;
      return {
        cache: true, // Cache in Fastly
      };
    },
  }),
});
```

</Panel>
<Panel id="go" full>

```go
r.CacheOptions.AfterSend = func(cr *fsthttp.CandidateResponse) error {
  cr.SetHeader("Cache-Control", "private, no-store")  // Don't cache in the browser
  cr.SetCacheable()  // Cache in Fastly
  age, err := cr.Age()
  if err != nil {
    return fmt.Errorf("cr.Age(): %w", err)
  }
  cr.SetTTL(3600 - age)
  return nil
}
```

</Panel>
 -->


### Cache in browsers, not at the edge

Fastly will not cache content marked `private`, which makes this directive a good way to apply this kind of differentiated caching policy via a single header attached to the response from your origin server:

```http
Cache-Control: private, max-age=3600
```

<!-- TabbedPanels component: 
<Panel id="cdn-services">

In a VCL service, you can also apply the same logic in `vcl_fetch`:

```vcl
set beresp.http.Cache-Control = "max-age=3600"; # Cache in the browser
return(pass); # Don't cache in Fastly
```

</Panel>
<Panel id="compute-services">

In a Compute service, you can apply an override in the [after-send](/guides/concepts/cache#after-send-callback) callback:

<TabbedPanels syncGroup="languages">
<Panel id="rust" full>

```rust compile_fail
req.set_after_send(|resp| {
  resp.set_header("Cache-Control", "max-age=3600"); // Cache in the browser
  resp.set_uncacheable(false); // Don't cache in Fastly
  Ok(())
});
```

</Panel>
<Panel id="javascript" full>

```javascript
const backendResp = await fetch(clientReq, {
  backend: 'example_backend',
  cacheOverride: new CacheOverride({
    afterSend(resp) {
      resp.headers.set('Cache-Control', 'max-age=3600'); // Cache in the browser
      return {
        cache: false, // Don't cache in Fastly
      };
    },
  }),
});
```

</Panel>
<Panel id="go" full>

```go
r.CacheOptions.AfterSend = func(cr *fsthttp.CandidateResponse) error {
  cr.SetHeader("Cache-Control", "max-age=3600")  // Cache in the browser
  cr.SetUncacheable()  // Don't cache in Fastly
  return nil
}
```

</Panel>
 -->


## Overriding cache behavior on requests

<div id="pre-defining-cache-behavior-on-requests"></div>

Sometimes you may know what cache behavior you'd like for the response before forwarding a request to the backend.

For details, see the following sections.

* [Bypassing the readthrough cache for a request](/guides/concepts/cache#bypassing-the-readthrough-cache-for-a-request)
* [Setting cache policy on a request](/guides/concepts/cache#setting-cache-policy-on-a-request)

> **IMPORTANT:** As noted in [cache outcome](#cache-outcome) above, where requests are flagged to bypass the readthrough cache or have an override TTL of 0, the response will never be cached.

## Divergences from RFC 9111

The behavior of Fastly's readthrough cache is based on the [HTTP Caching standard (RFC 9111)](https://httpwg.org/specs/rfc9111.html), with some exceptions.

<!-- TabbedPanels component: 
<Panel id="cdn-services">

* The existence of the `Authorization` header in the request does not prevent a response from being cached.

    To prevent requests with an `Authorization` header from receiving a cached response, add a snippet to the `vcl_recv` subroutine:

    ```vcl context="sub vcl_recv { ... }"
    if (req.http.Authorization) {
      return(pass);
    }
    ```

</Panel>
<Panel id="compute-services">

* The existence of the `Authorization` header in the request does not prevent a response from being cached.

    <TabbedPanels syncGroup="languages">
    <Panel id="rust">

    To prevent requests with an `Authorization` header from receiving a cached response, call `.set_pass(true)` on the request.

    ```rust compile_fail
    if req.get_header("Authorization").is_some() {
      req.set_pass(true);
    }
    req.send("my_backend_name");
    ```

    Refer to the Rust SDK documentation for [`set_pass`](https://docs.rs/fastly/latest/fastly/struct.Request.html#method.set_pass) to understand the behavior of this function and its interaction with similar `set_` functions in the SDK.

    </Panel>
    <Panel id="javascript">

    To prevent requests with an `Authorization` header from receiving a cached response, construct a `CacheOverride` object with the value `"pass"` and include it in the `fetch()` call.

    ```js
    import { CacheOverride } from "fastly:cache-override";

    const response = fetch(event.request, {
      backend: "my_backend_name",
      cacheOverride: event.request.headers.has("Authorization") ? new CacheOverride("pass") : undefined,
    });
    ```

    </Panel>
    <Panel id="go">

    To prevent requests with an `Authorization` header from receiving a cached response, set the value of the `Pass` field of the `CacheOptions` field on the `Request` object to `true`.

    ```go
    if r.Header.Get("Authorization") != "" {
      r.CacheOptions.Pass = true
    }
    resp, err := r.Send(ctx, "my_backend_name")
    ```

    </Panel>
     -->

* When the `Surrogate-Control` and `Cache-Control` headers are both present on a response, Compute considers `Surrogate-Control` before `Cache-Control` and honors the first header that parses successfully. This may cause `Surrogate-Control` values to take priority over stricter `Cache-Control` values on a shared-cache response.

* `Cache-Control: must-revalidate` and `Cache-Control: proxy-revalidate` are ignored in Compute. Once a response is stale, it can still be reused during a stale-serving path such as `stale-while-revalidate`.

* Because Compute is designed to handle requests coming from untrusted end users, request-side HTTP attributes do not directly affect cache behavior. For example:
   * Request-side cache controls, such as the `Cache-Control` request header, do not affect whether the readthrough cache will store the response. Note, however, that any cache controls on the response _do_ affect cache behavior.
   * Successful responses to "unsafe" HTTP methods (`PUT`, `DELETE`, etc.) do not cause the corresponding cached `GET` response to be invalidated.


## Related content

* [Caching best practices](/guides/full-site-delivery/caching/caching-best-practices)
* [Cache settings API](/reference/api/vcl-services/cache-settings/)
