
Multi-CDN: A Critical Decision for a Resilient Architecture

Ozgur Savas

Sr. Director, Sales Engineering, EMEA

When a cloud platform goes down, it's not theoretical: revenue, SLAs, and customer trust are on the line. Recent high-profile outages have reminded companies that dependence on a single vendor introduces real operational risk. As businesses scale globally and downtime directly impacts revenue and trust, multi-CDN has emerged as a critical architectural strategy to improve resilience and performance.

A multi-CDN strategy helps organizations build a faster, more resilient content delivery layer by combining redundancy, performance optimization, and cost efficiency. By distributing traffic across multiple CDNs, companies can seamlessly reroute around outages or regional bottlenecks, maintaining uptime while reducing the risk and cost of downtime. Intelligent traffic allocation ensures users are always served through the fastest available path, delivering consistently high-performance experiences worldwide. More companies are adopting multi-CDN as a design best practice to reduce single-vendor risk and to keep traffic flowing through link, node, and site incidents. By distributing requests across providers, they gain redundancy, more control over performance, and flexibility in how they manage cost.

The Fastly platform was designed from the beginning to handle multi-CDN complexity, supporting both straightforward and advanced deployment models without forcing a single way of operating. Whether customers choose a lightweight DNS setup or sophisticated traffic engineering, Fastly provides the flexibility needed to make multi-CDN work smoothly. In this blog, we'll briefly explain the different design options companies can use to start building the resilient architecture their business requires.

Multi-CDN Design Methods 

We recognize that transitioning to a multi-CDN architecture can seem daunting. To simplify both understanding and deployment, we have broken the design methods used to migrate to multi-CDN into a few categories:

  • DNS-based steering (traffic splitting)

  • Layer 7 traffic management 

  • CDN chaining (origin shielding) 

  • Client-side steering

Let's delve into the specifics of each of these methods and how Fastly is uniquely positioned to support and enable them.

DNS-Based Steering (Traffic Splitting)

This is the most fundamental and common method for implementing multi-CDN architectures. It relies on the Domain Name System (DNS) to act as the traffic switch, deciding which CDN provider a client should connect to before the client ever attempts to establish an HTTP connection.

When a user accesses a hostname, like blog.fastly.com, the request is eventually directed to the authoritative nameserver managed by a DNS provider. This nameserver is the crucial point for traffic steering, as it processes request metadata to determine the traffic destination. Based on the chosen traffic-splitting strategy, such as weighted or geo-based methods, the authoritative nameserver responds with a CNAME record. This record points to the hostname of the selected CDN provider, for instance, Fastly or another CDN endpoint. A commonly used geo-based splitting approach relies on analyzing the source IP or using anycast to estimate the user's geographical location and feed that context into its routing logic.

The pseudo-code below illustrates a straightforward example of DNS-based traffic splitting logic, specifically utilizing geographical location as the parameter.

country = getCountry(request)        // Infer user country from resolver IP

if country == "FR":
    route_to("CDN_FR")               // User in France: route to France-optimized CDN
elif country == "DE":
    route_to("CDN_DE")               // User in Germany: route to Germany-optimized CDN
else:
    route_to("CDN_DEFAULT")          // All other locations: route to global/default CDN

Companies sometimes choose to divide traffic using a randomized distribution based on assigned weights, or a simple round-robin approach. The pseudo-code below illustrates the logic for DNS-based splitting using a randomized distribution; round-robin selection can be used in place of random selection.

// Weights in percentages
CDN_A = 70                           // Percentage of traffic assigned to CDN_A
CDN_B = 30                           // Percentage of traffic assigned to CDN_B

r = random(1, 100)                   // Random number 1-100 used to probabilistically select a CDN

if r <= CDN_A:
    route_to("CDN_A")                // ~70% of the time: return DNS response pointing to CDN_A
else:
    route_to("CDN_B")                // Remaining ~30% of the time: return DNS response pointing to CDN_B

The weightings assigned to CDNs by the DNS provider can be dynamically tuned using APIs if the system permits. This adjustment is typically based on the CDNs' health or performance, which is determined by results from client RUM (Real User Monitoring) or synthetic testing tools (such as Catchpoint).
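As an illustration, the sketch below shows what such a feedback loop might look like. It is a minimal Python example that assumes hypothetical endpoints (RUM_METRICS_URL, DNS_WEIGHTS_URL) rather than any specific monitoring vendor's or DNS provider's real API; the URL and payload shapes are invented for the example.

import requests  # third-party HTTP client, assumed to be installed

# Hypothetical endpoints: real RUM tools and DNS providers expose their own
# APIs, and the URL and payload shapes below are invented for this example.
RUM_METRICS_URL = "https://rum.example.com/api/cdn-latency"
DNS_WEIGHTS_URL = "https://dns.example.com/api/v1/zones/example.com/weights"
API_TOKEN = "REPLACE_ME"

def rebalance_weights():
    # Pull recent p95 latency per CDN, e.g. {"CDN_A": 120, "CDN_B": 95} in ms.
    latencies = requests.get(RUM_METRICS_URL, timeout=5).json()

    # Weight each CDN inversely to its latency, then normalize to roughly 100%
    # (rounding may leave the total slightly off 100).
    inverse = {cdn: 1.0 / ms for cdn, ms in latencies.items() if ms > 0}
    total = sum(inverse.values())
    weights = {cdn: round(100 * value / total) for cdn, value in inverse.items()}

    # Push the new weights to the DNS provider's management API.
    requests.put(
        DNS_WEIGHTS_URL,
        json={"weights": weights},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=5,
    )
    return weights

if __name__ == "__main__":
    print(rebalance_weights())  # run periodically, e.g. from a scheduler

A job like this would typically run on a schedule, and a production version would add smoothing and minimum/maximum weight bounds so that a single noisy measurement cannot swing all traffic at once.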

Another common method to separate traffic at the DNS level is by content type. Highly cacheable static assets (such as images, JavaScript, or CSS) can be served from one subdomain and routed to a basic CDN that lacks capabilities such as real-time edge logic, edge compute, fine-grained cache control, and instant purging, while more complex or personalized dynamic content can be delivered from another subdomain and sent to an advanced CDN provider. This is where Fastly stands out in a multi-CDN setup. Fastly is highly effective at dynamically caching content that is challenging for traditional CDNs to handle efficiently, thanks to its instant purge capability, which invalidates cached content globally almost immediately, ensuring updates, fixes, or removals take effect for users right away. This is especially useful for APIs, where responses change frequently and stale data can cause errors.
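To make the idea concrete, the short Python sketch below models how an authoritative DNS could map subdomains to different CDN targets by content type. The hostnames and CNAME targets are purely illustrative placeholders, not real configuration.

# Hypothetical subdomain-to-CDN mapping; hostnames and targets are placeholders.
HOSTNAME_TO_CDN = {
    "static.example.com": "static-assets.basic-cdn.example.net",  # cacheable assets
    "www.example.com": "www.dynamic-cdn.example.net",             # personalized HTML
    "api.example.com": "api.dynamic-cdn.example.net",             # frequently changing APIs
}

def resolve_cname(hostname: str) -> str:
    """Return the CDN hostname the authoritative DNS would answer with."""
    # Anything not explicitly listed falls back to a default CDN.
    return HOSTNAME_TO_CDN.get(hostname, "default-cdn.example.net")

if __name__ == "__main__":
    print(resolve_cname("static.example.com"))  # -> static-assets.basic-cdn.example.net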

Getting started with multi-CDN using DNS-based traffic splitting can be straightforward, making it an accessible entry point for many companies. By leveraging the authoritative DNS to steer requests based on weights, geography, or content type, organizations can immediately distribute traffic across multiple CDNs without complex integration at the application layer. However, it's important to note that not all DNS providers support traffic-splitting features like weighted distributions or geo-based routing. Before implementing a multi-CDN strategy, companies should confirm with their DNS provider whether such capabilities are available; if not, consider a vendor that offers them.

Layer 7 Traffic Management 

Unlike DNS-based traffic steering, which makes its decision before a connection is established, using only hostnames and IP addresses (OSI Layers 3/4), Layer 7 steering works at the application layer. In this approach, DNS resolves to the IP of a decision engine, either a dedicated traffic controller or an edge compute instance, which analyzes the request before directing the client to the optimal CDN. Once the TCP and TLS handshakes are complete, the controller can decrypt the HTTP request and inspect it in detail. This includes examining HTTP headers (User-Agent, Referer, X-Forwarded-For), URL paths (for example, distinguishing /api/ versus /vod/), cookies, and query parameters for session stickiness. This enables far more granular control over traffic compared to DNS-only splitting, as routing decisions can consider headers, URLs, cookies, and even session-specific information.

Once the traffic controller has full visibility into the request, the controller applies real-time decision logic. It can query a live database of network conditions, leverage RUM (Real User Monitoring) data, and even measure the latency of the client connection during the handshake to make instant routing decisions.

Traffic steering from the controller can be accomplished through two main methods:

  1. HTTP Redirection: This approach instructs the client to reconnect to a specific CDN. However, it introduces an extra round-trip and an additional DNS resolution, which can result in increased latency.

  2. Reverse Proxying: In this method, the controller fetches the content from the chosen CDN and streams it back to the client while keeping the initial connection open. This strategy is frequently employed in Fastly Compute environments to efficiently execute edge logic using WebAssembly (Wasm) serverless functions. Both approaches are sketched below.
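The sketch below illustrates both approaches with a minimal HTTP server written in Python's standard library. In practice this logic would run on a dedicated traffic controller or at the edge (for example, as a Fastly Compute service compiled to Wasm from Rust, Go, or JavaScript); the path rules and CDN hostnames here are assumptions made for the example.

# Minimal Layer 7 controller sketch using only Python's standard library.
# The path rules and CDN hostnames are assumptions made for this example.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

VIDEO_CDN = "https://vod-cdn.example.net"  # illustrative CDN endpoints
OTHER_CDN = "https://api-cdn.example.net"

class ControllerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/vod/"):
            # Method 1: HTTP redirection -- tell the client to reconnect to the
            # chosen CDN (costs an extra round-trip and DNS lookup).
            self.send_response(302)
            self.send_header("Location", VIDEO_CDN + self.path)
            self.end_headers()
        else:
            # Method 2: reverse proxying -- fetch from the chosen CDN and stream
            # the response back over the client's existing connection.
            with urlopen(OTHER_CDN + self.path) as upstream:
                body = upstream.read()
                content_type = upstream.headers.get("Content-Type", "application/octet-stream")
            self.send_response(200)
            self.send_header("Content-Type", content_type)
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ControllerHandler).serve_forever()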

An alternative, and more common, way of using a Layer 7 traffic controller is to keep it out of the request path entirely and instead have it influence DNS via API updates, effectively creating a feedback loop between the Layer 7 and DNS layers. In this scenario, the Layer 7 controller constantly polls metrics, whether from RUM data, synthetic tests, or CDN health APIs. Based on thresholds or trends, it regularly updates DNS weights, even if no client traffic has recently reached the controller. The traffic-split decision itself stays with the DNS provider.

As you can see, sophisticated and highly granular traffic engineering is possible using Layer 7 traffic controls, either independently or in combination with DNS-based splitting. When an organization decides to adopt a multi-CDN strategy, they frequently begin with DNS-only splitting and subsequently integrate traffic control features to enhance the architecture.

CDN Chaining (Origin Shielding) 

This architecture is distinct from traditional multi-CDN setups because it doesn’t primarily split traffic between CDNs. Instead, it chains (stacks) CDNs together to protect the Origin Server from excessive load. The idea is to introduce an intermediate, centralized CDN layer that acts as a buffer between the edge CDNs and the origin.

When a client requests content, the authoritative DNS can pick one of the edge CDNs, as explained in DNS-based traffic splitting. If that edge CDN has a cache miss, it doesn't immediately fetch the content from the customer's origin. Instead, it is configured to use a shield CDN as its origin. This shield layer checks its own cache, which is usually larger and more centralized than the edge caches, and consolidates requests before they reach the origin.

Fastly Media Shield is often deployed to establish an effective multi-CDN architecture through a CDN chaining configuration. Media Shield is designed to optimize multi-CDN architectures by acting as a powerful, centralized origin layer that dramatically reduces origin load while preserving high performance. By placing Media Shield behind your CDNs and pointing it to your infrastructure, you consolidate identical requests (request collapsing), turn cache misses into cache hits, and significantly cut origin bandwidth, processing, and egress costs. Serving as a shield through a designated Fastly POP, it protects origins from traffic spikes (thundering herd), improves cache hit ratios, and ensures better availability during high traffic or live events. Media Shield also enhances the quality of experience by preventing origin disruptions, providing real-time logs for rapid troubleshooting, and enabling instant global purges within milliseconds, giving teams tighter control, better performance, and a more cost-efficient multi-CDN setup.

In a CDN chaining setup, companies use Media Shield as the origin for the edge CDNs. However, the origin server is also designated as a secondary origin, serving as a fallback in the rare event of a shield outage.
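The Python sketch below models the cache-miss behavior of an edge CDN in such a chain: ask the shield first, and fall back to the customer origin only if the shield is unreachable. The shield and origin hostnames are placeholders, not real Fastly or customer endpoints.

import urllib.error
import urllib.request

# Placeholder hostnames: neither the shield endpoint nor the origin below
# refers to a real Fastly or customer system.
SHIELD_ORIGIN = "https://shield.example-media-shield.net"
CUSTOMER_ORIGIN = "https://origin.example.com"

def fetch_with_fallback(path: str) -> bytes:
    """Model an edge CDN's cache-miss behavior in a chained setup: ask the
    shield first and fall back to the customer origin only if the shield fails."""
    for origin in (SHIELD_ORIGIN, CUSTOMER_ORIGIN):
        try:
            with urllib.request.urlopen(origin + path, timeout=5) as resp:
                return resp.read()
        except OSError:
            continue  # this hop failed (URLError is a subclass of OSError); try the next one
    raise RuntimeError("both the shield and the origin are unavailable")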

Client-side Steering

Client-side steering shifts the multi-CDN control plane directly to the user's device, making it the most deterministic traffic-steering method: decisions are based on the client's actual view of the network at that moment in time, rather than on a predetermined traffic split made at a remote decision point such as the authoritative DNS or a traffic controller.

This multi-CDN strategy employs a lightweight client-side SDK (such as a JavaScript library for web browsers or a full SDK for mobile applications) embedded directly within the application or page. Its function is to direct the client's traffic to a specific CDN, selected from a predefined list according to defined criteria. Although this method introduces operational complexity, it delivers results that are both deterministic and predictable.

Client-side steering is a common practice, especially among media and streaming providers, to ensure high-quality, reliable video delivery. Using a small piece of code embedded inside the video player, they can constantly monitor the network and dynamically switch between CDNs based on real-time performance, resulting in reduced buffering and optimized costs.
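Real implementations live in JavaScript inside the player or in native mobile SDKs, but the selection logic itself is simple. The Python sketch below, using hypothetical probe URLs, illustrates the core idea: measure each candidate CDN from the client's vantage point and pick the fastest reachable one.

import time
import urllib.request

# Hypothetical probe URLs pointing at the same small object on each CDN.
# A real player SDK would run this inside the browser or mobile app.
CDN_ENDPOINTS = {
    "CDN_A": "https://cdn-a.example.net/probe.bin",
    "CDN_B": "https://cdn-b.example.net/probe.bin",
}

def measure_latency(url: str) -> float:
    """Download a tiny probe object and return the elapsed time in seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=3) as resp:
        resp.read()
    return time.monotonic() - start

def pick_cdn() -> str:
    """Return the CDN with the lowest measured latency, skipping unreachable ones."""
    results = {}
    for name, url in CDN_ENDPOINTS.items():
        try:
            results[name] = measure_latency(url)
        except OSError:
            continue  # unreachable right now: do not consider this CDN
    if not results:
        raise RuntimeError("no CDN is currently reachable")
    return min(results, key=results.get)

if __name__ == "__main__":
    print(pick_cdn())  # a player would re-run this periodically and switch CDNs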

As we have covered so far, companies have a wide range of options to choose from when designing a multi-CDN architecture. The setup may be as simple as using DNS to split traffic by fixed percentages, or as advanced as incorporating Layer 7 traffic management, CDN chaining, and client-side steering. Irrespective of the chosen delivery method, a key consideration is the security of the traffic flowing through these edge networks.

Securing Multi-CDN

Managing security is often considered the most complex aspect of a multi-CDN strategy. A critical risk arises if your edge networks lack advanced security controls and flexible deployment models, as you may end up with inconsistent security policies across platforms. This inconsistency can expose you to attacks that one network might block, but another will miss. Fastly offers unparalleled flexibility in deployment models and sophisticated security controls, making it a standout choice for mitigating this risk.

In a multi-CDN world, security has to be as flexible as your delivery architecture. If you are using basic DNS-based traffic splitting with weights or CNAMEs, you may prefer each CDN in your stack to run its own edge security controls, with independent WAF, bot protection, and DDoS mitigation. However, that quickly creates a fragmented landscape in which you are responsible for keeping policies aligned and consistently enforced across providers. Each additional rule creates even more cascading complexity. When under attack, simplicity, time to mitigate, and confidence are paramount; a fragmented landscape can potentially be exploited by attackers.

A more streamlined option would be to centralize protection by using Fastly for consistent security, even when multiple CDNs are in place. If you utilize the Fastly Media Shield within a CDN chain, you can leverage Fastly Edge security to consolidate and protect all traffic. This approach allows for distributing edge rate limiting and DDoS protection capabilities across the edge CDNs while centralizing key security functions such as Next-Gen WAF (NGWAF), API Protection, and Bot Management on the Media Shield POP.

For organizations that prefer to anchor enforcement closer to their applications, deploying the NGWAF directly at the origin infrastructure allows them to maintain a unified security posture and shared rule set, regardless of which CDN happens to serve the request, even during an outage at the edge or if an attacker bypasses one of the CDNs. Fastly NGWAF can be deployed directly on the origin by running the agent inside a container or a virtual machine alongside the application. This positioning allows the WAF to inspect traffic as close to the application as possible. As a result, it provides deep visibility and protection even when traffic moves across various CDN edges. This placement also enables inspection of east-west traffic, which proves highly beneficial for development teams by enabling the monitoring of internal applications and endpoints that do not sit behind a CDN.

This powerful deployment method, which is unique to Fastly, can also potentially be used by deploying the Fastly NGWAF agent directly on the compute of other edge platforms in a sidecar configuration, bringing those platforms under the same unified security posture within Fastly. Whatever deployment option is used, Fastly offers a unified console to manage all deployment models from a single pane of glass.

Getting Started with a Multi-CDN Strategy

If you're looking to mitigate the impact of CDN vendor outages or want to enhance your delivery and security for maximum performance and cost efficiency, now is the ideal time to explore a multi-CDN strategy. In your journey to multi-CDN, Fastly offers the most advanced and flexible delivery and security capabilities, many of which are unique. Whether your goal is a sophisticated multi-CDN architecture, a more resilient infrastructure, or simply a 'just-in-case' insurance policy via a basic DNS configuration, Fastly is here to support you.

Reach out to us and let our experts collaborate with you to determine the best solution for your business needs.
