---
title: 'Developer guide: Backends'
summary: null
url: >-
  https://www.fastly.com/documentation/guides/integrations/non-fastly-services/developer-guide-backends
---

Most of the time, when Fastly receives a request from an end user, we deliver a response that we fetch from your server, which we call a backend, or origin. Fastly interacts with thousands of varied backend technologies and supports any backend that is an HTTP/1.1-compliant web server. A wide variety of software products and platforms can be used, including:

- **Traditional web servers:** You install and run your own operating system and web server, such as Apache, NGINX or Microsoft IIS (on your own physical hardware, or a virtualized infrastructure provider such as [AWS's EC2](https://aws.amazon.com/ec2/) or [Google Compute Engine](https://cloud.google.com/compute))
- **Platforms as a service:** [Heroku's](https://www.heroku.com) platform-as-a-service (or equivalent products such as [Google App Engine](https://cloud.google.com/appengine) or [DigitalOcean App Platform](https://www.digitalocean.com/docs/app-platform/)) manage routing, operating systems and virtualization, providing a higher-level environment in which to run web server apps.
- **Serverless platforms:** Serverless functions (such as [Google Cloud Functions](https://cloud.google.com/functions) or [AWS Lambda](https://aws.amazon.com/lambda/), sometimes known as 'functions as a service') can be extremely cost effective backends that only charge you when they are invoked, and can scale effortlessly - but as an even higher level abstraction, they offer less flexibility.
- **Static bucket storage:** Services such as [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/gsg/GetStartedWithS3.html) or [Google Cloud Storage](https://cloud.google.com/storage/) are popular and relatively inexpensive ways to connect Fastly to a set of static resources with no compute capability at all.

## Creating backends

Backends can be configured statically in multiple ways. Refer to [core concepts](https://www.fastly.com/documentation/guides/concepts#setting-up-a-backend) for instructions. [Dynamic backends](https://www.fastly.com/documentation/guides/integrations/non-fastly-services/developer-guide-backends#dynamic-backends) can be configured at runtime in Compute services.

> **IMPORTANT:** When setting up a static backend, configure the **host header override** and **SSL hostname**. These should almost always be set to the same hostname as you are using to identify the location of the backend. [Learn more](https://www.fastly.com/documentation/guides/integrations/non-fastly-services/developer-guide-backends#overriding-the-host-header).

### Dynamic backends

In Compute services, the Dynamic Backends feature can be used to register a backend at runtime, using methods available in Compute [language SDKs](https://www.fastly.com/documentation/reference/compute/sdks).

> **HINT:** If you're using a [free account](https://www.fastly.com/documentation/guides/account-info/billing/account-types), you may need to contact [support](https://support.fastly.com) to access this feature.

> **NOTE:** This page contains interactive code editors in multiple languages. Visit the URL to try them.

> **IMPORTANT:** As a best practice, define dynamic backend hostnames using a config store or application logic, rather than hardcoding static strings in your source code. **DO NOT** pass unvalidated user input directly to the backend registration function. Instead, use user input as a lookup key to retrieve authorized hostnames from a secure source like a config store.
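A minimal sketch of this lookup pattern in plain JavaScript (the key names and hostnames here are illustrative; in a real service the map would live in a config store):

```javascript
// Hypothetical allow-list mapping untrusted tenant keys to authorized
// hostnames. In production, read these values from a config store rather
// than hardcoding them in source.
const AUTHORIZED_ORIGINS = {
  "tenant-a": "a.example.com",
  "tenant-b": "b.example.com",
};

// Use the untrusted input only as a lookup key, never as the hostname itself.
function resolveOrigin(userInput) {
  const host = AUTHORIZED_ORIGINS[userInput];
  if (host === undefined) {
    throw new Error("unrecognized origin key");
  }
  return host; // safe to pass to the dynamic backend constructor
}
```

Any key that is not in the allow-list fails loudly, so unvalidated input can never reach the backend registration call.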

#### Registration scope and reuse

While dynamic backends offer flexibility, they operate under specific scoping and reuse rules to ensure performance and stability.

Compute registers dynamic backends at the **node level**. For efficiency, the system attempts to reuse these registrations whenever possible. If a subsequent request arrives on the same node within a short, undisclosed timeframe, it may leverage the existing registration. This optimization applies regardless of whether you are using [reusable sandboxes](https://www.fastly.com/documentation/guides/compute/developer-guides/sandbox-lifecycle/).

#### The "same name" rule

The behavior of registering a backend with a name already in use depends entirely on the properties provided:

- **Identical properties:** If you attempt to register a backend with a `name` that already exists on the node and all provided properties (target, timeouts, etc.) are identical, the call will succeed. This makes it safe to "re-register" the same backend across different requests.
- **Conflicting properties:** If the `name` is identical but any property differs, the registration attempt will fail. Even if it points to the same target URL, the system views a change in configuration as a collision.

| Scenario                                | Result      | Why?                                      |
| :-------------------------------------- | :---------- | :---------------------------------------- |
| **Name A** + **Config A** (First time)  | Success     | New registration created.                 |
| **Name A** + **Config A** (Second time) | Success     | Reuses existing registration on the node. |
| **Name A** + **Config B**               | **Failure** | Name collision with differing properties. |
| **Name B** + **Config B**               | Success     | Unique name prevents conflict.            |

To maximize reuse and eliminate collisions, you should build backend names based on their critical attributes.

> **HINT:** Generate a name by hashing the target URL and its configuration settings (e.g., `prod-db-us-east-1-500ms`). This ensures that identical configurations naturally share a registration, while any change in settings automatically generates a unique name.
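A sketch of such a naming scheme in plain JavaScript (the attribute set and separator are illustrative; include whichever properties you pass to the backend constructor):

```javascript
// Build a deterministic backend name from the attributes that define the
// registration, so identical configurations share a name and any change
// in settings automatically produces a new one.
function backendNameFor(target, config) {
  const parts = [target, config.connectTimeoutMs, config.firstByteTimeoutMs];
  // Restrict the name to a safe character set.
  return parts.join("-").replace(/[^A-Za-z0-9_-]/g, "_");
}

console.log(backendNameFor("db.example.com", { connectTimeoutMs: 1000, firstByteTimeoutMs: 500 }));
// "db_example_com-1000-500"
```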

## Selecting backends

Static backends are defined in the same way in all Fastly services, but the way you select the backend to use for a particular fetch operation differs significantly between VCL and the Compute platform.

### VCL

In [VCL services](https://www.fastly.com/documentation/guides/full-site-delivery/fastly-vcl), the `req.backend` variable indicates which backend to use when forwarding a client request. By default, Fastly will generate VCL that will assign the request to the backend for you, so if you have only one backend, there's nothing more to do. If you have more than one backend and don't want to write custom VCL, you can configure all backends to have [automatic load balancing](https://www.fastly.com/documentation/guides/concepts/load-balancing#automatic-load-balancing), or assign [Conditions](https://www.fastly.com/documentation/guides/full-site-delivery/conditions/using-conditions) to each backend in the web interface.

If you do want to use custom VCL, you first need to know what the "VCL name" is for the backend. This name is a normalized version of the name given to the backend in the web interface or API, usually (but not always) prefixed with `F_`. To discover the names assigned to the backends in your service, click **Show VCL** in the web interface, or [download the generated VCL via the API](https://www.fastly.com/documentation/reference/api/vcl-services/vcl/#get-custom-vcl-generated), and locate and examine the backend definitions. For example, if your backend is called "Host 1", the VCL name would most likely be `F_Host_1`.
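As an illustration only (the exact normalization rule is not guaranteed, which is why you should always confirm against the generated VCL), the typical transformation looks like this:

```javascript
// Approximate the normalization Fastly typically applies to backend names:
// non-alphanumeric characters become underscores and an "F_" prefix is added.
// This is an assumption for illustration; the generated VCL is authoritative.
function approximateVclName(backendName) {
  return "F_" + backendName.replace(/[^A-Za-z0-9]/g, "_");
}

console.log(approximateVclName("Host 1")); // "F_Host_1"
```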

Add your VCL code to select a backend after the `#FASTLY...` line in the appropriate subroutine of your VCL. Usually, backend assignment is done in `vcl_recv`, but can also be done in `vcl_miss` or `vcl_pass`.

```vcl context="sub vcl_recv { ... }"
#FASTLY RECV
if (req.url.path ~ "^/account(?:/.*)?\z") {
  set req.backend = F_Account_Microservice;
}
```

> **WARNING:** It is not possible to override the default backend using [VCL snippets](https://www.fastly.com/documentation/guides/full-site-delivery/fastly-vcl/about-fastly-vcl/#vcl-snippets) because VCL snippets are inserted into generated VCL before the default backend is assigned, so the default assignment would overwrite your custom one.

#### Interaction with shielding

In VCL services, backends may be configured to perform [shielding](https://www.fastly.com/documentation/guides/concepts/shielding), in which a fetch from a Fastly POP to a backend will first be forwarded to a second nominated Fastly POP, if the request is not already being processed by that nominated "shield" POP. When shielding is used, it is important to allow Fastly to choose the shield POP instead of the backend server when appropriate. This happens automatically if you use conditions to select backends, but if you use custom VCL, see [shielding with multiple origins](https://www.fastly.com/documentation/guides/concepts/shielding#multiple-backends) in our shielding guide.

### Fastly Compute

In [Compute](https://www.fastly.com/documentation/guides/compute) services, fetches that use static backends must explicitly specify the backend for each fetch, and the identifier for the backend is exactly as it appears via the API or web interface.

### Rust

```rust
use fastly::{Error, Request, Response};
const BACKEND_NAME: &str = "custom_backend";

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    Ok(req.send(BACKEND_NAME)?)
}
```

### JavaScript

```js
addEventListener("fetch", event => {
  const req = event.request;
  const backendResponse = fetch(req, { backend: "custom_backend" });
  event.respondWith(backendResponse);
});
```

> **HINT:** If you are using a Compute service with a static bucket host like Google Cloud Storage or Amazon S3, consider using a [starter kit](https://www.fastly.com/documentation/solutions/starters) designed to work with static hosting services.

Dynamic backends can be referenced in the same way as static backends, using the backend constructor specific to the language SDK you are using. In JavaScript, dynamic backends can also be used implicitly, by omitting a backend property in the `fetch()` call:

```js
/// <reference types="@fastly/js-compute" />

async function app() {
  // For any request, return the fastly homepage -- without defining a backend!
  return fetch('https://www.fastly.com/');
}
addEventListener("fetch", (event) => event.respondWith(app()));
```

The Compute platform does not currently support automatic load balancing or shielding.

## Overriding the `Host` header

If you use a hostname (rather than an IP address) to define your backend, Fastly will only use the hostname to look up the IP address of the server, not to set the `Host` header or negotiate a secure connection. By default the `Host` header on backend requests is copied from the client request. For example, if you own `www.example.com` and point it to Fastly, and create a second domain of `origin.example.com` that points to your origin, then the web server running on your origin and serving `origin.example.com` must also be able to serve requests that have a `Host: www.example.com` header. Fastly will also use the client-forwarded hostname to establish a secure connection using [Server Name Indication](https://en.wikipedia.org/wiki/Server_Name_Indication).

This is often undesirable behavior and may not be compatible with static bucket hosts or serverless platforms. Therefore, when creating backends on your Fastly service, consider setting all of the properties `address`, `override_host`, `ssl_sni_hostname` and `ssl_cert_hostname` to the same value: the hostname of the backend (e.g., "example.com").

The CLI command <kbd>fastly backend create</kbd> **does this automatically**. Using the API, web interface, or VCL code, you must set these properties separately.

This is also essential for [service chaining](https://www.fastly.com/documentation/guides/getting-started/services/service-chaining), and for many hosting providers, such as [Heroku](https://www.heroku.com) and [AWS S3](https://docs.aws.amazon.com/AmazonS3/latest/gsg/GetStartedWithS3.html).

### Static bucket providers

Most bucket providers require a `Host` header that identifies the bucket, and often the region in which the bucket is hosted:

| Service                        | `Host` header                                  |
| ------------------------------ | ---------------------------------------------- |
| Amazon S3                      | `{BUCKET}.s3.{REGION}.amazonaws.com`           |
| Alibaba Object Storage Service | `{BUCKET}.{REGION}.aliyuncs.com`               |
| Backblaze (S3-Compatible mode) | `{BUCKET}.s3.{REGION}.backblazeb2.com`         |
| DigitalOcean Spaces            | `{SPACE}.{REGION}.digitaloceanspaces.com`      |
| Google Cloud Storage           | `{BUCKET}.storage.googleapis.com`              |
| Microsoft Azure Blob Storage   | `{STORAGE_ACCOUNT_NAME}.blob.core.windows.net` |
| Wasabi Hot Cloud Storage       | `{BUCKET}.s3.{REGION}.wasabisys.com`           |
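The patterns in this table can be captured in a small helper. A sketch for a few providers (the provider keys are illustrative; the hostname patterns follow the table above):

```javascript
// Build the backend Host header for a handful of bucket providers,
// following the hostname patterns in the table above.
function bucketHost(provider, bucket, region) {
  switch (provider) {
    case "s3":     return `${bucket}.s3.${region}.amazonaws.com`;
    case "gcs":    return `${bucket}.storage.googleapis.com`;
    case "spaces": return `${bucket}.${region}.digitaloceanspaces.com`;
    default:       throw new Error(`unknown provider: ${provider}`);
  }
}

console.log(bucketHost("s3", "my-bucket", "us-east-1"));
// "my-bucket.s3.us-east-1.amazonaws.com"
```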

### Serverless and PaaS platforms

Most platform-as-a-service providers require that requests carry a `Host` header with the hostname of your app, not the public domain of your Fastly service.

| Service | `Host` header              |
| ------- | -------------------------- |
| Heroku  | `{app-name}.herokuapp.com` |

## Modifying the request path

In some cases, you may need to modify the path of the request URL before it is passed to a backend. There are a few possible reasons for this, the two most common of which result from using a **static bucket provider**:

- **Bucket selection**: Where the bucket provider requires the URL path to be prefixed with the bucket name.
- **Directory indexes**: Some providers do not support automatically loading directory index files for directory-like paths. For example, the path `/foo/` may return an "Object not found" error, even though `/foo/index.html` exists in the same bucket. If your provider doesn't support automatic directory indexes, you can add the appropriate index filename to the path.

The following providers _require_ path modifications to select the right bucket:

| Service             | Path modification       |
| ------------------- | ----------------------- |
| Backblaze (B2 mode) | `/file/{BUCKET}/{PATH}` |

> **HINT:** If a bucket provider supports selecting a bucket using both a path and a hostname, we recommend using the hostname method.

In VCL services, path modifications are best performed in `vcl_miss`, which has access to the `bereq` object, to avoid mutating the original client request. In a Compute program, the modification can generally be done on a request instance or a clone of it, before sending it to a backend:

### VCL

```vcl context="sub vcl_miss { ... }"
if (req.method == "GET" && req.backend.is_origin) {
  set bereq.url = "/file/YOUR_BUCKET_NAME" + req.url;
  if (bereq.url.path ~ "/\z") {
    set bereq.url = bereq.url.path + "index.html";
  }
}
```

### Rust

```rust
use fastly::{Error, Request, Response};

const BACKEND_NAME: &str = "example_backend";
const BUCKET_NAME: &str = "my-bucket";

#[fastly::main]
fn main(mut req: Request) -> Result<Response, Error> {

    let path = req.get_path();
    let page = if path.ends_with('/') {
        "index.html"
    } else {
        ""
    };
    let path_with_bucket = format!("/{}{}{}", BUCKET_NAME, path, page);
    req.set_path(&path_with_bucket);

    // Send the request to backend
    Ok(req.send(BACKEND_NAME)?)
}
```

### JavaScript

```js
addEventListener("fetch", event => {
  const req = event.request;
  const url = new URL(req.url);
  url.pathname = "/file/YOUR_BUCKET_NAME" + url.pathname;
  if (url.pathname.endsWith('/')) url.pathname += 'index.html';
  const bereq = new Request(url.toString(), req);
  const backendResponse = fetch(bereq, { backend: "example_backend" });
  event.respondWith(backendResponse);
});
```

In VCL services with [shielding](https://www.fastly.com/documentation/guides/concepts/shielding/) enabled or which use `restart`, care should be taken to do path modifications only once. To ensure that the modification only affects the request just before it is sent to the origin, check the value of the `req.backend.is_origin` variable.

## Redirecting for directory indexes

Some **static bucket providers** do not support automatically redirecting a directory request that doesn't end with a `/`. For example, a request for `/foo` where the bucket contains a `/foo/index.html` object, will often return an "Object not found" `404` error. If you wish, you can configure Fastly so that in such cases, we retry the origin request, theorising that 'foo' might be a directory, and if we find an object there, redirect the client to it:

### VCL

```vcl context="sub vcl_deliver { ... }"
if (resp.status == 404 && req.url.path !~ "/\z" && !req.http.restart-for-dir) {
  set req.http.restart-for-dir = "1";
  set req.url = req.url.path + "/" + if (req.url.qs != "", "?" + req.url.qs, "");
  restart;
}
```

### Rust

```rust
use fastly::http::{header, StatusCode};
use fastly::{Error, Request, Response};

const BACKEND_NAME: &str = "example_backend";
const BUCKET_NAME: &str = "my-bucket";

#[fastly::main]
fn main(mut req: Request) -> Result<Response, Error> {
    let mut retry_req = req.clone_with_body();

    let path = req.get_path();
    let page = if path.ends_with('/') {
        "index.html"
    } else {
        ""
    };
    let mut path_with_bucket = format!("/{}{}{}", BUCKET_NAME, path, page);
    req.set_path(&path_with_bucket);

    // Send the request to backend
    let resp = req.send(BACKEND_NAME)?;

    if resp.get_status() == StatusCode::NOT_FOUND && !path_with_bucket.ends_with("/index.html") {
        let orig_path = retry_req.get_path().to_string();

        path_with_bucket = format!("/{}{}/index.html", BUCKET_NAME, &orig_path);
        retry_req.set_path(&path_with_bucket);

        // Send the retry request to backend
        let resp_retry = retry_req.send(BACKEND_NAME)?;

        if resp_retry.get_status() == StatusCode::OK {
            // Retry for a directory page has succeeded, redirect externally to the directory URL.
            let new_location = format!("{}/", orig_path);
            let resp_moved = Response::from_status(StatusCode::MOVED_PERMANENTLY)
                .with_header(header::LOCATION, new_location);
            Ok(resp_moved)
        } else {
            Ok(resp_retry)
        }
    } else {
        Ok(resp)
    }
}
```

### JavaScript

```js
addEventListener('fetch', event => event.respondWith(handleRequest(event)));

const BACKEND_NAME = "example_backend";
const BUCKET_NAME = "my-bucket";

async function handleRequest(event) {
  const originalReq = event.request;

  let url = new URL(originalReq.url);
  url.pathname = "/file/" + BUCKET_NAME + url.pathname;
  if (url.pathname.endsWith('/')) url.pathname += 'index.html';
  let newReq = new Request(url, originalReq);

  let resp = await fetch(newReq, { backend: BACKEND_NAME });

  if (resp.status == 404 && !url.pathname.endsWith("/index.html")) {
    // Not found, and not already an index page: try again with the directory index
    url = new URL(originalReq.url);
    const directoryPath = url.pathname + "/";
    url.pathname = "/file/" + BUCKET_NAME + directoryPath + "index.html";

    // Make a new request for the directory index page
    newReq = new Request(url, originalReq);

    // Send the retry request to the backend
    resp = await fetch(newReq, { backend: BACKEND_NAME });

    if (resp.status == 200) {
      // The retry for a directory index page succeeded; redirect the client to the directory URL.
      resp = new Response('', { status: 301, headers: { location: directoryPath } });
    }
  }

  return resp;
}
```

## Customizing error pages

When a backend is not working properly or a request is made for a non-existent URL, the backend may return an error response such as a `404`, `500`, or `503`, the content of which you may not be able to control (or predict in advance). If you wish, you can replace these bad responses with a custom, branded error page of your choice. You can encode these error pages directly into your Fastly configuration or, if your service has a static bucket origin, you could use an object from your static bucket to replace the platform provider's error page.

Both of these mechanisms work by inspecting the status of the backend response and, where appropriate, substituting a response of your own.
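As a sketch of the decision logic in plain JavaScript (the status selection and page content are illustrative; in a Compute service you would apply this to the response returned by `fetch`):

```javascript
// Decide whether a backend response should be replaced with a branded page.
// Here we replace 404s and all 5xx errors; adjust the set to taste.
function shouldReplace(status) {
  return status === 404 || status >= 500;
}

// A stand-in for your branded error page. Alternatively, fetch a designated
// object from your static bucket and serve that instead.
function brandedErrorPage(status) {
  return `<!DOCTYPE html><html><body><h1>Sorry! Something went wrong (${status}).</h1></body></html>`;
}
```

In a Compute handler, after `const resp = await fetch(req, { backend })`, you would return `new Response(brandedErrorPage(resp.status), { status: resp.status, headers: { "content-type": "text/html" } })` whenever `shouldReplace(resp.status)` is true.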

> **HINT:** Some static bucket providers will allow you to designate a particular object in your bucket to serve in the event that an object is not found. If they don't support that, this is a good way to implement the same behavior using Fastly and get support for a range of other error scenarios at the same time.

> **IMPORTANT:** If you are implementing directory redirects _and_ custom error pages, ensure the directory redirect happens first.

## Setting cache lifetime (TTL)

In general, it makes sense for the server that generates a response to attach a caching policy to it (e.g., by adding a `Cache-Control` response header). This allows the server to apply precise control over caching behavior without having to apply blanket policies that may not be suitable in all cases. However, if you do prefer to apply caching policies based on patterns in the URL or content-type, or indeed a blanket policy for all resources, you can use your Fastly configuration to set the TTL. See [HTTP caching semantics](https://www.fastly.com/documentation/guides/concepts/cache/cache-freshness) for more details.
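For instance, a blanket policy keyed on the URL path might look like this in plain JavaScript (the patterns and lifetimes are illustrative only):

```javascript
// Map a request path to a Cache-Control policy when the origin does not
// attach one. Patterns and lifetimes here are examples, not recommendations.
function cachePolicyFor(path) {
  if (/\.(png|jpe?g|gif|css|js)$/.test(path)) return "public, max-age=86400";
  if (path.startsWith("/api/")) return "private, no-store";
  return "public, max-age=300";
}

console.log(cachePolicyFor("/img/logo.png")); // "public, max-age=86400"
```

In a Compute service you would set the result as a header on the backend response before returning it; in VCL you would set the TTL in `vcl_fetch`.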

### Static bucket providers

Static bucket providers often allow caching headers to be configured as part of the metadata of the objects in your bucket. Ideally, use this feature to tell Fastly how long you want to keep objects in cache. For example, when uploading objects to [Google Cloud Storage](https://cloud.google.com/storage/), use the [`gsutil`](https://cloud.google.com/storage/docs/gsutil/addlhelp/WorkingWithObjectMetadata) command:

```term
$ gsutil -h "Content-Type:text/html" -h "Cache-Control:public, max-age=3600" cp -r images gs://bucket/images
```

Setting caching metadata in this way, at the object level, allows for precise control over caching behavior, but you can often also configure a single cache policy to apply to all objects in the bucket.

> **HINT:** If your bucket provider can trigger events when objects in your bucket change, and you can attach a serverless function to those events, consider using that mechanism to purge the Fastly cache when your objects are updated or deleted. This allows you to set a very long cache lifetime across the board, and benefit from a higher cache hit ratio and corresponding increased performance. We wrote about [how to do this for Google Cloud Platform](https://www.fastly.com/blog/purge-fastly-gcp-cloud-functions) on our blog.

### Web servers

If using your own hardware, or an infrastructure provider on which you install your own web server (such as [AWS's EC2](https://aws.amazon.com/ec2/), or [Google Compute Engine](https://cloud.google.com/compute)), you will have a great deal more flexibility than with a static bucket host, and somewhat more than with a platform as a service provider. The most important thing to consider when using your own web server installation is the caching headers that you set on responses that you serve to Fastly, most commonly `Cache-Control` and `Surrogate-Control`.

- **Apache**: Consider making use of the [`mod_expires`](https://httpd.apache.org/docs/current/mod/mod_expires.html) module. For example, to cache GIF images for 75 minutes after the image was last accessed, add the following to a directory `.htaccess` file or to the global Apache config file:
  ```plain
  ExpiresActive On
  ExpiresByType image/gif "access plus 1 hours 15 minutes"
  ```

- **NGINX**: Add the `expires` directive:
  ```plain
  location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    expires 1h;
  }
  ```
  Alternatively, if you need more flexibility in modifying headers, you can use the [`ngx_http_headers_module`](https://nginx.org/en/docs/http/ngx_http_headers_module.html) `add_header` directive.

## Removing metadata

Some hosting providers, particularly **static bucket providers**, include additional headers when serving objects over HTTP. You may want to remove these before you serve the object to your end users. In VCL services this is best done in `vcl_fetch`, where changes to the object can be made before it is written to the cache.
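A sketch of the same idea in JavaScript, applied to the headers of a backend response before returning it (the header names are examples of provider metadata; adjust the list for your provider):

```javascript
// Header names commonly added by static bucket providers (examples only).
const METADATA_HEADERS = [
  "x-amz-request-id",
  "x-amz-id-2",
  "x-goog-generation",
  "x-goog-metageneration",
];

// Remove provider metadata from a Headers object before serving the response.
function stripMetadata(headers) {
  for (const name of METADATA_HEADERS) headers.delete(name);
  return headers;
}
```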

## Ensuring backend traffic comes only from Fastly

Putting Fastly in front of your backends offers many resilience, security and performance benefits, but those benefits may not be realized if it is also possible to send traffic to the backend directly. Depending on the capabilities of your backend, there are various solutions to ensure that there is no route to your origin except through Fastly.

### IP restriction

We publish a list of the [IP addresses that make up the Fastly IP space](https://www.fastly.com/documentation/reference/api/utils/public-ip-list/).

Restricting access to requests coming only from Fastly IPs is not by itself an effective way to protect your origin because all Fastly customers share the same IP addresses when making requests to origin servers. However, since IP restriction can often be deployed at an earlier point in request processing, it may be useful to combine this with one of the other solutions detailed in this section.
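On the origin side, membership in a published range can be checked with simple bit arithmetic. A minimal IPv4-only sketch (real deployments must also handle IPv6 and refresh the address list as Fastly's ranges change):

```javascript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipToInt(ip) {
  return ip.split(".").reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0;
}

// Test whether an address falls within a CIDR range such as "151.101.0.0/16".
function inCidr(ip, cidr) {
  const [base, bits] = cidr.split("/");
  const mask = Number(bits) === 0 ? 0 : (~0 << (32 - Number(bits))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}
```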

### Shared secret

A simple way to restrict access to your origin is to set a shared secret in a custom header in your Fastly configuration.

### VCL

```vcl context="sub vcl_recv { ... }"
set req.http.Edge-Auth = "some-pre-shared-secret-string";
```

### Rust

```rust
use fastly::{Error, Request, Response};

const BACKEND_NAME: &str = "backend_name";
#[fastly::main]
fn main(mut req: Request) -> Result<Response, Error> {
    req.set_header("edge-auth", "some-pre-shared-secret-string");
    Ok(req.send(BACKEND_NAME)?)
}
```

### JavaScript

```js
addEventListener("fetch", (event) => {
  const req = event.request;
  req.headers.set("edge-auth", "some-pre-shared-secret-string");
  event.respondWith(fetch(req, { backend: "example_backend" }));
});
```

To make this solution work, you must configure your backend server to reject requests that don't contain the secret header. This is an effective but fragile solution: if a single request is accidentally routed somewhere other than your origin, the secret will be leaked and is then usable by a bad actor to make any number of any kind of request to your origin.
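A sketch of the origin-side check in plain JavaScript (the header name and secret are the illustrative values used above; in a real server, load the secret from the environment, not source code):

```javascript
// Reject any request that does not carry the expected shared secret.
// EXPECTED_SECRET is an illustrative value; load it from configuration.
const EXPECTED_SECRET = "some-pre-shared-secret-string";

function isAuthorized(headers) {
  return headers.get("edge-auth") === EXPECTED_SECRET;
}
```

In an Express or Node `http` handler, respond with a `403` whenever `isAuthorized` returns false.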

### Per-request signature

Consider constructing a one-time, time-limited signature within your Fastly service, and verify it in your origin application:

### VCL

```vcl context="sub vcl_miss { ... }"
declare local var.edge_auth_secret STRING;
set var.edge_auth_secret = table.lookup(config, "edge_auth_secret"); # stored in an edge dictionary named "config"
if (!bereq.http.Edge-Auth) {
  declare local var.data STRING;
  set var.data = strftime({"%s"}, now) + "," + server.datacenter;
  set bereq.http.Edge-Auth = var.data + "," + digest.hmac_sha256(var.edge_auth_secret, var.data);
}
```

### Rust

```rust
use fastly::{ConfigStore, Error, Request, Response};
use std::time::{Duration, SystemTime, UNIX_EPOCH};
use hmac_sha256::HMAC;
use base64::prelude::*;

const BACKEND_NAME: &str = "origin_0";

#[fastly::main]
fn main(mut req: Request) -> Result<Response, Error> {
    let config = ConfigStore::open("config");
    if let Some(secret) = config.get("edge_auth_secret") {
        // The POP handling this request, from the FASTLY_POP environment variable
        let pop = std::env::var("FASTLY_POP").unwrap_or_default();
        let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap_or(Duration::ZERO).as_secs();
        let data = format!("{},{}", now, pop);
        let sig = HMAC::mac(data.as_bytes(), secret.as_bytes());
        let signed = format!("{},{}", data, BASE64_STANDARD.encode(sig));
        req.set_header("edge-auth", signed);
    }
    Ok(req.send(BACKEND_NAME)?)
}
```

### JavaScript

```js
import { ConfigStore } from "fastly:config-store";

addEventListener("fetch", (event) => {
  const req = event.request;
  const config = new ConfigStore("config");
  const secret = config.get("edge_auth_secret");
  const data = String(Math.floor(Date.now() / 1000));
  // hmacSha256 is a placeholder for your own HMAC-SHA256 helper
  // (for example, one built on SubtleCrypto or a bundled library).
  req.headers.set("edge-auth", data + "," + hmacSha256(data, secret));
  event.respondWith(fetch(req, { backend: "example_backend" }));
});
```

This is slightly harder to verify than a constant string, but if a request leaks and a signature is compromised, it provides only short term access to make a single kind of request.

### Proprietary signatures for cloud service providers

Static bucket providers like Amazon S3 cannot be programmed to support arbitrary signature algorithms like the one above, but they do support a specific type of signature for authentication to protected buckets and individually protected objects.

Although Amazon's signature was created for its S3 service, it is widely supported as a compatibility convenience by many other bucket providers including Backblaze (in S3-Compatible mode), DigitalOcean Spaces, Google Cloud Storage, and Wasabi Hot Cloud Storage. See the [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) for more details.

This and other proprietary signatures can be constructed in Fastly services using VCL or the Compute platform. The following examples in our solutions gallery provide reference implementations:

- [Amazon S3](https://www.fastly.com/documentation/solutions/examples/using-s3-compatible-buckets-as-private-origins/)
- [Microsoft Azure](https://www.fastly.com/documentation/solutions/examples/azure-blob-storage-bucket-origin-private/)
- [Alibaba Object Storage Service](https://www.fastly.com/documentation/solutions/examples/alibaba-oss-private)
