About Fastly VCL
Fastly VCL is a domain-specific programming language that evolved from the Varnish proxy cache and forms a core part of Fastly's platform architecture. The language is intentionally limited in scope, which allows it to run extremely fast and securely and makes it available to all requests passing through Fastly. With VCL, you can accomplish everything from simple tasks like adding cookies or setting Cache-Control headers to complex implementations like complete paywall solutions.
Unlike traditional applications, VCL services don't provide a single entry point for your code. Instead, Fastly exposes built-in subroutines as "hooks" that execute at significant moments during each HTTP request's lifecycle. This approach means your uploaded code functions as a configuration rather than a standalone application. Changes to your VCL can be generated automatically through the Fastly control panel, compiled, and distributed to all Fastly caches worldwide, all without requiring maintenance windows or service downtime.
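To make the hook model concrete, here is a minimal sketch of the kind of custom VCL this enables: it sets a Cache-Control header on origin responses that lack one and sends an example cookie to new clients. The header value, cookie name, and the decision to do either of these things at all are illustrative assumptions, not Fastly-generated configuration (the `#FASTLY` macros it contains are explained later on this page).

```vcl
sub vcl_fetch {
  #FASTLY fetch
  # Example: give origin responses with no caching directives a one-hour edge TTL
  if (!beresp.http.Cache-Control) {
    set beresp.http.Cache-Control = "public, max-age=3600";
  }
  return(deliver);
}

sub vcl_deliver {
  #FASTLY deliver
  # Example: hand out an arbitrary cookie to clients that did not present one
  if (!req.http.Cookie) {
    set resp.http.Set-Cookie = "seen=1; Path=/";
  }
  return(deliver);
}
```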
VCL and what you can do with it
You can create custom VCL files with specialized configurations and upload them into Fastly caches for activation. Fastly supports mixing and matching custom VCL with Fastly-generated VCL, using them together simultaneously. While you retain all control panel options when using custom VCL, keep in mind that custom VCL always takes precedence over any VCL generated by the control panel.
Personal data should not be incorporated into VCL. Our Compliance and Law FAQ describes in detail how Fastly handles personal data privacy.
The VCL request lifecycle
The following subroutines are triggered by Fastly in this order:
Name | Trigger point | Default return state | Alternative return states |
---|---|---|---|
vcl_recv | Client request received | lookup [1] | pass, error, restart, upgrade |
vcl_hash | A cache key will be calculated | hash [2] | |
vcl_hit | An object has been found in cache | deliver | pass, error, restart |
vcl_miss | Nothing was found in the cache, preparing backend fetch | fetch | deliver_stale, pass, error |
vcl_pass | Cache bypassed, preparing backend fetch | pass [3] | error |
vcl_fetch | Origin response headers received | deliver [4] | deliver_stale, pass, error, restart |
vcl_error | Error triggered (explicitly or by Fastly) | deliver | restart |
vcl_deliver | Preparing to deliver response to client | deliver | restart |
vcl_log | Finished sending response to client | deliver [5] | |
Some subroutines can return `error`, `restart`, or `upgrade`. Any `error` return state will result in the execution flow passing to `vcl_error`, while `restart` will result in the execution flow passing to `vcl_recv`. The special `upgrade` return state will terminate the VCL flow and create a managed WebSocket connection.
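As an illustrative sketch (not code Fastly generates for you), the fragment below raises a synthetic `error` from `vcl_recv` and handles it in `vcl_error`; the path, status code, and message text are arbitrary choices for the example:

```vcl
sub vcl_recv {
  #FASTLY recv
  # Example: refuse requests for a hypothetical admin path at the edge
  if (req.url ~ "^/admin") {
    error 403 "Forbidden";
  }
  return(lookup);
}

sub vcl_error {
  #FASTLY error
  # Execution arrives here for any error return state; deliver a synthetic response
  if (obj.status == 403) {
    set obj.http.Content-Type = "text/plain";
    synthetic "Access denied";
    return(deliver);
  }
  return(deliver);
}
```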
Adding VCL to your service configuration
Everything that your VCL service does is powered by VCL. Even features that you enable in the web interface or via the API without writing code yourself will ultimately generate VCL code written by Fastly. To support combining your own VCL logic with Fastly's generated code, we include macros in the VCL program, one in each subroutine, such as `#FASTLY recv`.
You can mix and match high-level VCL generative objects, VCL snippets, and full custom VCL files when building your configuration; all three are interoperable, though it's typically more maintainable to choose a single approach.
VCL generative objects
Using the web interface or API, you can create configuration objects that generate VCL for you.
HINT: Using these constructs is a good way to get started with VCL services, but if you start to have a lot of them, it may be better to manage your own VCL with a custom VCL file.
Object | Purpose | Instructions |
---|---|---|
Header | Setting HTTP headers or VCL variables | Web interface, API |
Response | Creating a predefined response to be served from the edge | Web interface, API |
Condition | Restricting actions to only requests that meet criteria defined as a VCL expression | Web interface, API |
Apex redirect | Redirecting a bare domain such as example.com to add a www. prefix | API |
Cache settings | Changing the TTL or cache behavior of an HTTP response | Web interface, API |
GZip | Compressing HTTP responses before inserting them into cache | Web interface, API |
HTTP3 | Advertising HTTP/3 support | Web interface |
Rate limiter | Creating rate limiters to stop individual clients from bombarding your site | Web interface, API |
Request settings | Changing the cache behavior of a request (similar to a cache setting but applied before the request has been forwarded to origin) | Web interface, API |
Settings | Updating default values for cache TTLs | API |
VCL snippets
By adding your custom VCL code using snippets, you can insert raw code into VCL subroutines alongside Fastly-generated code. Your code snippets will be added at the end of the subroutine you select, which can have an impact on what is possible with snippets.
Snippets can be regular or dynamic.
- Regular snippets are versioned in the same way as the rest of your service. Changes require a new version of the service configuration, and can therefore also be rolled back by rolling back the service version. These are a good choice for VCL code that performs logical actions like routing, setting headers, or authentication (see the sketch after this list).
- Dynamic snippets are not versioned. After attaching a dynamic snippet to a version of your service and activating it, any subsequent changes you make to the snippet apply immediately. This also means that if you roll back a service configuration to an earlier version, and the snippet was present in that earlier version, the snippet will remain unchanged and contain the latest code. Dynamic snippets are useful for including generated logic or declarative data, such as redirection rules or allowlists (although if you can use a dictionary, that's typically a better solution).
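For instance, the body of a regular snippet attached to `vcl_recv` might classify traffic for later routing decisions. The sketch below is raw VCL as you would paste it into the snippet (with no surrounding `sub` block); the URL pattern and header name are purely illustrative:

```vcl
# Regular snippet body attached to vcl_recv (illustrative)
# Mark API traffic so later logic (or a condition) can treat it differently
if (req.url ~ "^/api/") {
  set req.http.X-Traffic-Class = "api";
}
```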
If you have VCL snippets defined on a service that also has custom VCL, the snippets will typically be rendered as part of the Fastly macro, replacing the placeholders such as `#FASTLY recv` that you must include in any custom VCL file. However, if your snippet has a type of "none", you may include the snippet explicitly at any point in your custom VCL file.
Snippets can be included as many times and in as many places as desired, subject to compiler rules (for example, if your snippet attempts to set `bereq.http.cookie`, you cannot include that snippet in the `vcl_recv` subroutine, because `bereq` is not available in the `vcl_recv` scope; see VCL variables for more details).
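To make the scoping rule concrete, the hypothetical snippet body below compiles when included in `vcl_miss` or `vcl_pass`, where the backend request (`bereq`) is in scope, but would be rejected if included in `vcl_recv`; the header it sets is an arbitrary example:

```vcl
# Valid in vcl_miss or vcl_pass, where bereq exists; not valid in vcl_recv
unset bereq.http.Cookie;
set bereq.http.X-Forwarded-Host = req.http.host;
```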
Custom VCL
Custom VCL allows you to upload a full VCL source file, which will entirely replace the one that would otherwise be generated by Fastly. To make sure that features you create using VCL generative objects still work, we require that custom VCL files include Fastly's code macros, one in each subroutine.
We recommend that you start from the following boilerplate, which includes all the required Fastly macro placeholders and also presents VCL subroutines in the order in which they are executed.
```vcl
sub vcl_recv {
  #FASTLY recv

  # Normally, you should consider requests other than GET and HEAD to be uncacheable
  # (to this we add the special FASTLYPURGE method)
  if (req.method != "HEAD" && req.method != "GET" && req.method != "FASTLYPURGE") {
    return(pass);
  }

  # If you are using image optimization, insert the code to enable it here
  # See https://www.fastly.com/documentation/reference/io/ for more information.

  return(lookup);
}

sub vcl_hash {
  set req.hash += req.url;
  set req.hash += req.http.host;
  #FASTLY hash
  return(hash);
}

sub vcl_hit {
  #FASTLY hit
  return(deliver);
}

sub vcl_miss {
  #FASTLY miss
  return(fetch);
}

sub vcl_pass {
  #FASTLY pass
  return(pass);
}

sub vcl_fetch {
  #FASTLY fetch

  # Unset headers that reduce cacheability for images processed using the Fastly image optimizer
  if (req.http.X-Fastly-Imageopto-Api) {
    unset beresp.http.Set-Cookie;
    unset beresp.http.Vary;
  }

  # Log the number of restarts for debugging purposes
  if (req.restarts > 0) {
    set beresp.http.Fastly-Restarts = req.restarts;
  }

  # If the response is setting a cookie, make sure it is not cached
  if (beresp.http.Set-Cookie) {
    return(pass);
  }

  # By default we set a TTL based on the `Cache-Control` header but we don't parse additional directives
  # like `private` and `no-store`. Private in particular should be respected at the edge:
  if (beresp.http.Cache-Control ~ "(?:private|no-store)") {
    return(pass);
  }

  # If no TTL has been provided in the response headers, set a default
  if (!beresp.http.Expires && !beresp.http.Surrogate-Control ~ "max-age" && !beresp.http.Cache-Control ~ "(?:s-maxage|max-age)") {
    set beresp.ttl = 3600s;

    # Apply a longer default TTL for images processed using Image Optimizer
    if (req.http.X-Fastly-Imageopto-Api) {
      set beresp.ttl = 2592000s; # 30 days
      set beresp.http.Cache-Control = "max-age=2592000, public";
    }
  }

  return(deliver);
}

sub vcl_error {
  #FASTLY error
  return(deliver);
}

sub vcl_deliver {
  #FASTLY deliver
  return(deliver);
}

sub vcl_log {
  #FASTLY log
}
```
Constraints and limitations
VCL services are subject to the following restrictions or limits:
Item | Limit | Implications of exceeding the limit |
---|---|---|
URL size | 8KB | VCL processing is skipped and a "Too long request string" error is emitted. |
Cookie header size | 32KB | The cookie header will be unset and Fastly will set req.http.Fastly-Cookie-Overflow = "1", then run your VCL as normal. |
Request header size | 69KB | Depending on the circumstances, exceeding the limit can result in Fastly closing the client connection abruptly, the client receiving a 502 Gateway Error response with "I/O error" in the body, or receiving a 503 Service Unavailable response with the text "Header overflow" in the body. |
Response header size | 69KB | A 503 error is triggered with obj.response value of "backend read error". This error can be intercepted in vcl_error . See Fastly generated errors to learn about all synthetic errors generated by Fastly. |
Request header count | 255 | VCL processing is skipped or aborted if in progress, and a response with "Header overflow" in the body is emitted. A number of headers are added to the request by Fastly, so the practical limit is lower, but is not a predictable constant. Assuming a practical limit of 200 is safe. |
Response header count | 96 | VCL processing is skipped or aborted if in progress, and a response with "Header overflow" in the body is emitted. A number of headers are added to the response by Fastly, so the practical limit is lower, but is not a predictable constant. Assuming a practical limit of 85 is safe. |
req.body size | 8KB | Larger requests will have an empty req.body , so request body payload is available in req.body only for payloads smaller than 8KB. |
Surrogate key size | 1KB | Requests to the purge API that cite longer keys will fail, so in practical terms it is useless to tag content with keys exceeding the length limit. |
Surrogate key header size | 16KB | Only keys that are entirely within the first 16KB of the surrogate key header value will be applied to the cache object. |
VCL file size | 1MB | Attempts to upload VCL via the API will fail if the VCL payload is larger. |
VCL total size | 3MB | Attempts to upload VCL via the API will fail if the VCL payload would cause your total service VCL to be larger than this. |
restart limit | 3 restarts | The 4th invocation of the restart statement will trigger a 503 error. This error can be intercepted in vcl_error . |
Dictionary item count | 1000 | Attempts to create dictionary items will fail if they exceed the limit. Contact Fastly support to discuss raising this limit. |
Dictionary item key length | 256 characters | Attempts to create dictionary items will fail. |
Dictionary item value length | 8000 characters | Attempts to create dictionary items will fail. |
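Some of these limits can be observed and handled in VCL itself. The sketch below assumes only the behaviors described in the table (the `Fastly-Cookie-Overflow` request header and the 503 triggered by exceeding the restart limit); the marker header and the rewritten status are illustrative choices, not required handling:

```vcl
sub vcl_recv {
  #FASTLY recv
  # The Cookie header was dropped because it exceeded the 32KB limit
  if (req.http.Fastly-Cookie-Overflow == "1") {
    set req.http.X-Cookie-Dropped = "1";  # example marker for later logic or logging
  }
  return(lookup);
}

sub vcl_error {
  #FASTLY error
  # Intercept the 503 generated once the restart limit has been exceeded
  if (obj.status == 503 && req.restarts >= 3) {
    set obj.status = 500;
    set obj.response = "Internal Server Error";
    synthetic "Too many internal restarts";
    return(deliver);
  }
  return(deliver);
}
```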
WARNING: Personal data should not be incorporated into VCL. Our Compliance and Law FAQ describes in detail how Fastly handles personal data privacy.
Where to learn more about VCL and Varnish
Fastly's Developer Hub provides a Fastly VCL reference for programming custom edge logic on VCL services. You can also learn more about building on the Fastly platform using VCL and the current best practices involved.
The official Varnish documentation is a good place to start when looking for online information. In addition, Varnish Software, which provides commercial support for Varnish, has written a free online book.
Notes referenced in the request lifecycle table:

1. All return states from `vcl_recv` (except `restart`) pass through `vcl_hash` first. `return(lookup)` and `return(pass)` both move control to `vcl_hash` but flag the request differently, which will determine the exit state from `vcl_hash`.
2. The only possible return state from `vcl_hash` is `hash`, but it will trigger different behavior depending on the earlier return state of `vcl_recv`. The default `return(lookup)` in `vcl_recv` will prompt Fastly to perform a cache lookup and run `vcl_hit` or `vcl_miss` after hash. If `vcl_recv` returns `error`, then `vcl_error` is executed after hash. If `vcl_recv` returns `return(pass)`, then `vcl_pass` is executed after hash. The hash process is required in all these cases to create a cache object to enable hit-for-pass.
3. The `return(pass)` exit from `vcl_pass` triggers a backend fetch, similarly to `return(fetch)` in `vcl_miss`, but the altered return state is a reminder that the object is flagged for pass, so that it cannot be cached when processed in `vcl_fetch`.
4. Returning with `return(deliver)` from `vcl_fetch` cannot override an earlier pass, but `return(pass)` here will prevent the response being cached.
5. The return state from `vcl_log` simply terminates request processing.