Director

A director declaration groups instances of backend into a list and defines a policy for choosing a member of the list, with the aim of distributing traffic across the backends. This is typically used for load balancing.

Directors vary in syntax depending on their policy. See policy variants.

Like backends, directors can be assigned to req.backend, and can also be used as a backend in other directors. Directors also have a health status, which is calculated as an aggregation of the health status of their constituent backends. The rules on whether a particular director is healthy or not depend on the configuration of the director and what type of policy the director uses. See quorum and health.

Policy variants

Directors are offered in the following policy variations:

Random

The random director selects a backend randomly from its members, considering only those which are currently healthy.

Field | Property of | Required | Description
.retries | Director | No | The number of times the director will try to find a healthy backend or connect to the randomly chosen backend if the first connection attempt fails. If .retries is not specified, then the director will use the number of backend members as the retry limit.
.quorum | Director | No | The percentage threshold that must be reached by the cumulative .weight of all healthy backends in order for the director to be deemed healthy. By default, the director is healthy if it has at least one healthy member backend.
.weight | Backend | Yes | The weighted probability of the director selecting the backend. For example, a backend with weight 2 will be selected twice as often as one with weight 1.

In the following example, the random director will choose F_backend1 half the time, and the other two backends 25% of the time each. The total weight of the members is 4, so the 50% quorum is reached as long as the healthy members account for a cumulative weight of at least 2: F_backend1 alone (weight 2) is enough, whereas F_backend2 or F_backend3 alone (weight 1 each) is not. If the quorum weight is not reached, a 503 error containing "Quorum weight not reached" will be returned to the client if this director is the backend for the request. If the random director fails to connect to the chosen backend, it will retry the random selection up to three times before failing the request.

director my_dir random {
  .quorum = 50%;
  .retries = 3;
  { .backend = F_backend1; .weight = 2; }
  { .backend = F_backend2; .weight = 1; }
  { .backend = F_backend3; .weight = 1; }
}

In general, a random director will result in an even and stable traffic distribution.

Fallback

The fallback director always selects the first healthy backend in its backend list to send requests to. If Fastly fails to establish a connection with the chosen backend, the director will select the next healthy backend in the list.

This type of director is the simplest and has no properties other than the .backend field for each member backend.

In the following example, the fallback director will send requests to F_backend1 unless its health status is unhealthy. If Fastly is unable to connect to F_backend1 (e.g., a connection timeout is encountered), the director will select the next healthy backend. If all backends in the list are unhealthy or all backends fail to accept connections, a 503 response containing "All backends failed" or "unhealthy response" is returned to the client.

director my_dir fallback {
  { .backend = F_backend1; }
  { .backend = F_backend2; }
  { .backend = F_backend3; }
}

In a fallback director, all traffic goes to one constituent backend, so this kind of director is not used as a load balancing mechanism.

Content

The hash director will select backends based on the cache key of the content being requested.

Field | Property of | Required | Description
.quorum | Director | No | The percentage threshold that must be reached by the cumulative .weight of all healthy backends in order for the director to be deemed healthy. By default, the director is healthy if it has at least one healthy member backend.
.weight | Backend | Yes | The weighted probability of the director selecting the backend. For example, a backend with weight 2 will be selected twice as often as one with weight 1.

In this example, traffic will be distributed across three backends, and requests for the same content (that is, requests producing the same cache key) will always select the same backend, provided it is healthy. It does not matter who is making the request.

director the_hash_dir hash {
  .quorum = 20%;
  { .backend = F_origin_0; .weight = 1; }
  { .backend = F_origin_1; .weight = 1; }
  { .backend = F_origin_2; .weight = 1; }
}

A hash director will not necessarily balance traffic evenly across the member backends. If one object on your website is more popular than others, the backend that the director associates with that object's cache key may receive a disproportionate amount of traffic. The hash director will prioritize achieving an allocation of keys to each backend that is in proportion to the director's weights, rather than maintaining a stable mapping of keys to specific backends.

The same principle can be achieved with a chash director configured with .key=object, and this method offers different trade-offs.
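As a rough sketch (the director name and .id values below are illustrative, and the backends are those from the hash example above), the equivalent chash declaration might look like this:

director the_chash_obj_dir chash {
  # object is the default key, shown here for clarity
  .key = object;
  { .backend = F_origin_0; .id = "origin_0"; }
  { .backend = F_origin_1; .id = "origin_1"; }
  { .backend = F_origin_2; .id = "origin_2"; }
}

See the Consistent hashing section below for the full list of chash properties and the differences in how keys are mapped to backends.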

Client

A client director will select a backend based on the identity of the client, expressed by client.identity, which by default is populated from client.ip. This is commonly known as 'sticky session load balancing' and often used to lock a user to a nominated backend in order to make use of server-side session state, like a shopping cart.

Field | Property of | Required | Description
.quorum | Director | No | The percentage threshold that must be reached by the cumulative .weight of all healthy backends in order for the director to be deemed healthy. By default, the director is healthy if it has at least one healthy member backend.
.weight | Backend | Yes | The weighted probability of the director selecting the backend. For example, a backend with weight 2 will be selected twice as often as one with weight 1.

In this example, traffic will be distributed across three backends based on the identity of the user, which is derived from an application-specific cookie. Regardless of what URL is being requested, requests from the same user will always be sent to the same backend (provided that it remains healthy):

director the_client_dir client {
  .quorum = 20%;
  { .backend = F_origin_0; .weight = 1; }
  { .backend = F_origin_1; .weight = 1; }
  { .backend = F_origin_2; .weight = 1; }
}

sub vcl_recv {
  set client.identity = req.http.cookie:user_id; # Or omit this line to use client.ip
  set req.backend = the_client_dir;
  #FASTLY recv
}

A client director will not necessarily balance traffic evenly across the member backends. If one user's session makes more requests than other users' sessions, the backend that the director associates with that client identity may receive a disproportionate amount of traffic. The client director will prioritize achieving an allocation of users to each backend that is in proportion to the director's weights, rather than maintaining a stable mapping of users to specific backends.

The same principle can be achieved with a chash director configured with .key=client, and this method offers different trade-offs.

Consistent hashing

The chash director will select which backend should receive a request according to a consistent hashing algorithm. Depending on the .key property in the declaration, the chash director selects backends either based on the cache key of the content being requested (.key=object) or based on the identity of the client (.key=client). The former (object) is the default when the .key property is not explicitly specified. Commonly, consistent hashing on the cache key is used to 'shard' a large dataset across multiple backends.

Field | Property of | Required | Description
.key | Director | No | Either object (select backends based on the cache key of the content being requested) or client (select backends based on the identity of the client, expressed by client.identity, which by default is populated from client.ip). The default is object.
.seed | Director | No | A 32-bit number specifying the starting seed for the hash function. The default is 0.
.vnodes_per_node | Director | No | How many vnodes to create for each node (backend under the director). The default is 256. There is a limit of 8,388,608 vnodes in total for a chash director.
.quorum | Director | No | The percentage threshold that must be reached by the cumulative .weight of all healthy backends for the director to be deemed healthy. By default, the director is healthy if it has at least one healthy member backend.
.id | Backend | Yes | An attribute that is combined with the cache key to calculate the hash. If the ID is changed, reshuffling will occur and objects may shift to a different backend.

This is the same mechanism Fastly uses in our clustering process to decide which server a cached object resides on in a POP. Consistent hashing means that the assignment of requests to backends will change as little as possible when a backend is added to the pool or removed from the pool. When a member of the pool goes offline, requests that would normally go to that backend will be distributed among the remaining backends. Similarly, when a backend comes back online, it takes over a little bit of every other backend's traffic without disturbing the overall mapping more than necessary.

In this example, traffic will be distributed across three backends. Requests that hash to the same key will always select the same backend, provided it is healthy.

director the_chash_dir chash {
  { .backend = s1; .id = "s1"; }
  { .backend = s2; .id = "s2"; }
  { .backend = s3; .id = "s3"; }
}
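The optional director-level properties can be set in the same way as for other director types. The following is a hedged sketch only (the property values and director name are illustrative, not recommendations); with .key = client it behaves like the client director described above:

director the_chash_client_dir chash {
  .key = client;
  .seed = 12345;
  .vnodes_per_node = 1024;
  .quorum = 50%;
  { .backend = s1; .id = "s1"; }
  { .backend = s2; .id = "s2"; }
  { .backend = s3; .id = "s3"; }
}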

A chash director is similar in effect to a hash or client director (depending on the value of the .key property), but while hash and client directors prioritize the even distribution of keys across the constituent backends of the director, a chash director prioritizes a stable mapping of each individual key to its target backend.

As a result, when backends in a chash director become unhealthy, there is much less reallocation of keys between the other backends than would be seen in a hash or client director. However, a chash director will also allocate traffic less evenly.

Importance to shielding

Directors are an integral part of Fastly's shielding mechanism, which collects and concentrates traffic from across the Fastly network into a single POP. When you enable shielding for a backend in your service, Fastly will generate a shield director in the declarations space of your VCL, and add shield selection logic into the #FASTLY recv macro in your vcl_recv subroutine.

To see the shield director and logic generated by enabling shielding, download the generated VCL for your service.

Quorum and health

In VCL, each backend and director has a health status, which can be either sick or healthy. The status of an individual backend is determined by sending regular health check requests to the backend and testing for an expected response. You can query the health of the currently selected backend using the req.backend.healthy VCL variable, or the health of a nominated backend with backend.{NAME}.healthy.
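For example, here is a minimal sketch of health-based routing in vcl_recv, reusing the the_hash_dir director and F_origin_0 backend declared earlier (the error text and header name are illustrative):

sub vcl_recv {
  set req.backend = the_hash_dir;

  # Health of the currently selected backend (here, the director as a whole).
  if (!req.backend.healthy) {
    error 503 "No healthy origins";
  }

  # Health of a specific, named backend.
  if (!backend.F_origin_0.healthy) {
    set req.http.X-Origin-0-Status = "sick";
  }

  #FASTLY recv
}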

Directors also have a health status but, rather than being the direct result of a health check, it is derived from the health status of the director's member backends. The 'quorum' value allows this to be tuned for some director policies, as described above. This can become complex if a director is a member of another director.

A common use case for quorum is where the backends in a director cannot individually handle the full load of inbound traffic. When enough of the backends in the director are offline that the remainder would be overwhelmed, it can be preferable to consider the entire director unhealthy. This makes it easier to switch traffic to a different director that has a larger set of healthy backends, or to take the site offline entirely to help the backends recover faster.
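As a sketch of this pattern (the pool and backend names are hypothetical), two weighted pools can be combined under a fallback director so that traffic shifts automatically when the primary pool loses quorum:

director primary_pool random {
  .quorum = 50%;
  { .backend = F_primary_1; .weight = 1; }
  { .backend = F_primary_2; .weight = 1; }
}

director standby_pool random {
  .quorum = 50%;
  { .backend = F_standby_1; .weight = 1; }
  { .backend = F_standby_2; .weight = 1; }
}

# If primary_pool falls below quorum it is considered sick as a whole,
# and the fallback director sends traffic to standby_pool instead.
director site_dir fallback {
  { .backend = primary_pool; }
  { .backend = standby_pool; }
}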

Consistent hashing visualized

Here's an example of how consistent hashing works.

For this example, we'll use a simplified setup with four vnodes per server to make the concept easier to follow. Keep in mind that the diagrams are not drawn to scale, and a ring with only four vnodes per server would distribute load very unevenly in practice; you'd typically use many more vnodes. The chash director in Fastly VCL defaults to 256 vnodes per node and supports up to 8,388,608 vnodes in total. The more vnodes you use, the more evenly traffic is distributed.

Setting up the hash ring

Consider three server machines named "A", "B", and "C". The goal is to distribute incoming requests evenly across these servers.

To accomplish this, imagine arranging these servers in a circle. Rather than physically moving the servers, you can do this arrangement virtually. In this virtual arrangement, each position where a server appears is called a "node," and each node gets assigned a numerical value called a "hash value." This circular arrangement of hash values is called a "hash ring."

Instead of placing each physical server just once on the ring, you create multiple virtual positions for each server. These virtual positions are called "vnodes" (virtual nodes). By creating multiple vnodes for each server around the ring, you achieve more even load distribution.

To construct the simplified ring:

  1. Hash each server identity multiple times. Using a hashing function h, hash each server name ("A", "B", "C") with four different seed values (0, 1, 2, 3)
  2. Map to ring positions. The hash values range from 0 to 359, creating a ring of size 360
  3. Place vnodes on the ring. Each hash result becomes a vnode position on the ring

For instance, when you hash server "B" with seed 2, you might get the value 6. When you hash server "C" with seed 1, you might get 41. After hashing all servers with all seeds, you end up with twelve vnodes distributed around the ring.

[Image: chash-ring-3-nodes]

Assigning objects to servers

Once the ring is set up, here's how to determine which server handles each request:

  1. Hash the incoming object. Each object or request gets hashed to a value between 0 and 359
  2. Find the next vnode. Starting from the object's hash value, move clockwise around the ring until you find the first vnode
  3. Route to the corresponding server. That vnode tells you which physical server should handle the request

Consider these examples:

  • An object hashing to 10 → next vnode is at position 41 (server "C")
  • An object hashing to 80 → also routes to server "C"
  • An object hashing to 100 → routes to server "B"
  • An object hashing to 300 → routes to server "A"

The colored regions in the diagram show which ranges of hash values each server is responsible for handling.

Adding and removing servers

The benefit of consistent hashing becomes apparent when you need to change the number of servers. Let's see what happens when you remove server "C" from the three-server setup.

With only servers "A" and "B". When you reconstruct the ring with just two servers (using the same hashing function and vnode count), the hash ranges previously handled by server "C" get redistributed between the remaining servers. Crucially, objects that were already assigned to servers "A" and "B" mostly stay put.

[Image: chash-ring-2-nodes]

Looking at the examples:

  • Object hashing to 10: previously "C" → now "A"
  • Object hashing to 80: previously "C" → now "B"
  • Object hashing to 100: still "B" (unchanged)
  • Object hashing to 300: still "A" (unchanged)

Adding a fourth server "D". Similarly, when you add a new server to create a four-server ring, the new server takes over portions of the hash ranges from existing servers, but most assignments remain stable.

[Image: chash-ring-4-nodes]

This stability is what makes consistent hashing valuable. Only the objects near the boundary points need to be remapped, rather than requiring a complete redistribution of all objects.