Queuing / Waiting room (JS)

Park your users in a virtual queue to reduce the demand on your origins during peak times.

Fastly Compute

Use this starter

Using the Fastly CLI, create a new project using this starter somewhere on your computer:

$ fastly compute init --from=https://github.com/fastly/compute-starter-kit-javascript-queue

Or click the button below to create a GitHub repository, provision a Fastly service, and set up continuous deployment:

Deploy to Fastly


  • Park visitors in a queue to limit the amount of active users on the website ⏳
  • Ship queue analytics to log endpoints 🔎
  • Allow certain requests, such as robots.txt and favicon, to bypass the queue 🤖
  • No client-side scripting required ⚡️

Getting started

  1. If you haven't already, sign up for Upstash and create a Redis service.
  2. Initialize a Compute project using this starter kit.
    fastly compute init --from=https://github.com/fastly/compute-starter-kit-javascript-queue
  3. Create the upstash backend, changing the default hostname to the one provided in the Upstash console.
  4. Create the protected_content backend by accepting the default example host or setting your own.
  5. Populate the config store (named config) by following the prompts to configure Upstash and set a secret for signing cookies.
  6. Run fastly compute publish to deploy your queue.

Understanding the code

This starter is fully featured and pulls in a few dependencies on top of the @fastly/js-compute npm package:

  • @upstash/redis - a REST-based Redis client for storing queue state. You could easily swap this for your own storage backend.
  • jws - a library for generating and validating JSON Web Tokens.

The starter requires a backend to which requests are forwarded once visitors have made it through the queue. By default, this is a public S3 bucket serving an index page and some assets, for demonstration purposes.

The template uses webpack to bundle index.js and its imports into a single JS file, bin/index.js. This bundle is then compiled into a WebAssembly module, bin/index.wasm, using the js-compute-runtime CLI tool that ships with the @fastly/js-compute npm package, and finally packaged into a .tar.gz file ready for deployment to Compute.


When a visitor makes a request for the first time, we generate a signed JWT containing their position in the queue, which is determined by incrementing the queue length in Redis (INCR queue:length). This signed JWT is sent back to the visitor as an HTTP cookie.

Periodically, the current queue cursor is incremented in Redis (INCR queue:cursor), which effectively lets one more user in. To show a visitor how many others are in front of them in the queue, we subtract the current queue cursor from the position saved in their JWT.
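The letting-in step amounts to two small operations, sketched below with an in-memory Map standing in for Redis. The key name queue:cursor matches the text; the helper names are illustrative.

```javascript
const store = new Map([["queue:cursor", 0]]); // in-memory stand-in for Redis

// INCR-like helper: bump a counter and return its new value
function incr(key) {
  const next = (store.get(key) ?? 0) + 1;
  store.set(key, next);
  return next;
}

// Run periodically (e.g. on a timer): advance the cursor, admitting one more user
function letOneIn() {
  return incr("queue:cursor");
}

// Visitors ahead = signed position minus current cursor, clamped at zero
function visitorsAhead(position) {
  const cursor = store.get("queue:cursor") ?? 0;
  return Math.max(0, position - cursor);
}

letOneIn(); // cursor -> 1
console.log(visitorsAhead(5)); // prints 4
```

Tuning how often the cursor is advanced is how you control the rate at which traffic is released to the origin.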

On subsequent requests, when a JWT is supplied, we verify the signature and extract the position from the JWT. If the current queue cursor is higher than the user's signed position, they will be allowed in.

Next steps

This page is part of a series in the Rate limiting use case.

Starters are a good way to bootstrap a project. For more specific use cases, and answers to common problems, try our library of code examples.