Industry Edge Cloud Strategy Report - 2024

Edge cloud strategies for gaming


The gaming industry faces several industries’ worth of challenging problems, all rolled into one. Gaming platforms and studios are playing on nightmare mode when it comes to optimizing performance, security, and user experiences.

Gaming companies have it hard. Platforms have to distribute content as efficiently as the largest entertainment and software companies do. Studios have to optimize their applications and APIs so that in-game and game-related experiences perform as seamlessly as your favorite social media platform. Sometimes studios are owned by platforms and are part of the same company, and because in-game purchases and add-ons typically constitute a large portion of overall revenue, they need to operate a world-class ecommerce platform as well.

In this report we’ll walk through the challenges facing gaming platforms, studios, and developers, and the ways an edge cloud strategy could help improve productivity, reduce costs, and improve security while offloading many complications to the network edge. 

An edge cloud strategy can help gaming companies focus on their games, their communities, and the work that is most important to their success.

3 ways that performance plays a role

1. Optimizing large downloads across cost and speed
For content downloads the aim is to provide ‘good enough’ download performance at an effective price, and to avoid terrible experiences at all costs. A user doesn’t get angry if a download takes 10 minutes instead of 8, but they get furious if it takes 6 hours. Optimize for ‘good enough’ experiences that are cheaper rather than chasing maximum download speeds that incur higher costs.
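The tradeoff above can be sketched with a toy calculation. All prices, throughputs, and the payload size here are hypothetical, illustrative numbers, not real vendor rates:

```python
# Hypothetical comparison of two delivery tiers for a 60 GB game download.
# Prices and throughputs are illustrative assumptions, not real vendor rates.

def delivery_profile(size_gb, throughput_mbps, price_per_gb):
    """Return (download_minutes, cost_usd) for a payload at a given tier."""
    minutes = (size_gb * 8 * 1000) / throughput_mbps / 60
    return round(minutes, 1), round(size_gb * price_per_gb, 2)

premium = delivery_profile(60, 800, 0.085)   # "as fast as possible" tier
standard = delivery_profile(60, 600, 0.045)  # "good enough" tier

# premium  -> (10.0, 5.1):  10 minutes at $5.10 of egress
# standard -> (13.3, 2.7):  a few minutes slower at roughly half the cost
```

At these illustrative numbers, the ‘good enough’ tier is about three minutes slower per download and close to half the price, which is exactly the kind of tradeoff the section describes.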

2. Instant experiences on game-related applications
Performance is critical for in-game and game-related applications and APIs, just as it is for other user experiences around the internet. A good user experience will keep users coming back to a game and engaging with everything related to it. Experiences that feel instant are extremely advantageous when trying to build a community, not just around the core gameplay, but the features and extended experiences built around it. The probability of a visitor bouncing off a page increases 32% as the page load time goes from 1 second to 3 seconds (source). Users will be more likely to re-engage if their experiences with game-related applications and community interactions are faster and smoother.

3. Ecommerce optimization in marketplaces
In-game spending is essentially its own ecommerce industry inside the gaming vertical, and it needs to be optimized in all the same ways a traditional purchasing funnel would be. Bounce rate here isn’t just about community building or user experience; it’s also about lost revenue.

4 ways that security plays a role

1. A WAF that works, and works at the edge
A “WAF” is a web application firewall, used to monitor and filter traffic to web applications and APIs to block malicious or dangerous traffic, while allowing legitimate traffic through. A gaming company usually runs a complicated suite of applications. There are the games themselves, but then there are also websites and various in-game applications. There can also be a ton of game-related sites and applications that are outside of the game itself, like forums, live chats, leaderboards, and community sites, and the huge revenue drivers of the marketplaces, both in and outside of the gameplay. 

All of these applications and APIs need to be secured without bringing application developers to a grinding halt. In theory WAFs can do this, but not all WAFs are effective, and an open secret in security circles is that many orgs purchase a WAF and leave it in logging mode, never actually turning blocking mode on. Another problem is running a slew of different WAFs because each has limitations on what it can cover. Some can’t do on-prem; others are limited to operating within a certain environment; still others can’t integrate with modern CI/CD workflows to ensure updates and changes don’t interfere with your security coverage. Finding one WAF that works everywhere it’s needed, handles most of the security checks efficiently at the network edge, and integrates into the workflows already in use can take an organization from having lots of problems (and merely logging them) to preventing them altogether.
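The difference between logging mode and blocking mode comes down to whether a rule match actually changes the response. Here is a minimal sketch of that distinction; the rule names, request shape, and `evaluate` function are invented for illustration and are not any specific WAF’s API:

```python
# Minimal sketch of why "logging mode" differs from "blocking mode".
# Rules and the request shape are hypothetical, not a real WAF's API.

LOG, BLOCK = "log", "block"

def evaluate(request, rules, mode=LOG):
    """Return (allowed, findings). In log mode, matches are only recorded."""
    findings = [name for name, matches in rules if matches(request)]
    allowed = not (findings and mode == BLOCK)
    return allowed, findings

rules = [
    ("sqli", lambda r: "' OR 1=1" in r.get("query", "")),
    ("path_traversal", lambda r: "../" in r.get("path", "")),
]

attack = {"path": "/items", "query": "id=1' OR 1=1--"}
evaluate(attack, rules, mode=LOG)    # (True, ["sqli"])  -> logged but let through
evaluate(attack, rules, mode=BLOCK)  # (False, ["sqli"]) -> actually stopped
```

A WAF stuck in log mode produces the same findings as one in block mode; the only difference is that the attack still reaches the application.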

2. DDoS is a constant threat in gaming
Many gaming companies are popular, with well-known games and large user bases. This makes them an attractive target for DDoS attacks. Sometimes the traffic isn’t even malicious, with the potential for huge spikes around new features or map launches, new downloadable content (DLC), or new game titles. These overwhelming traffic events might not always hit the infrastructure that supports gameplay, but there are many other vulnerable areas where they could cause huge problems for an organization. So much revenue is generated from marketplaces and in-game purchases that a DDoS attack that shuts down transactions can significantly affect the company’s bottom line.

3. Fraud and account takeover
At the account level, both fraud and account takeovers (ATOs) are a major concern. It will be said over and over in this report: the marketplaces inside many games are a major revenue driver for gaming companies, so anything that interferes with that profitability is a serious problem. The other side of that equation is that there is a very real monetary incentive for malicious actors to exploit these systems, because real money is at stake. This can include anything from credit card and gift card fraud to bots that drive fake game registrations, attempt account takeovers, and continuously probe for new ways to exploit the system.

4. Downtime, resilience, and availability
Once upon a time gamers played one at a time, or maybe with a second player standing next to them at the arcade, or sitting next to them on the couch. We’re in a massively multiplayer world now, and site reliability engineers have a lot to look out for. The multiplayer gaming network has to remain available with very low latency, and the waiting rooms and matchmaking systems have to keep people engaged. Outside of gameplay there can be a lot of other dependencies to look out for, and it’s important to have solid security across all properties, not just where the gameplay occurs.

If an authentication system relies on a central service then a successful attack on an organization’s website could bring down all online games and limit an entire community to offline-only. Bots and other malicious actors are ready to throw malware, phishing, ATOs, and other attack vectors across every bit of exposed technology they can find, so everything needs to be covered, even if it seems like it’s of lower importance. Attacks can start anywhere, even seemingly innocuous parts of your footprint, and end up causing serious amounts of downtime, lost revenue, and user dissatisfaction. 

Optimization challenges for the gaming industry

The challenges faced by the gaming industry aren’t completely unheard of. Other companies like media streamers and software providers have to send out large chunks of data to customers. And ecommerce sites have to manage products and inventory. And social media applications have to manage community interaction and intense amounts of personalization. The crazy part about the gaming industry is that they have to do all three of these with expert execution, and all at the same time. Here’s a quick (and far from exhaustive) list of challenging areas for the gaming industry. 

Maximize origin offload
Gaming platforms, which are responsible for game distribution, need to ruthlessly optimize to serve as much as possible from their CDN’s cache. This lets them scale down the number of servers and the scaling capacity they have to manage on their own. That reduction in complexity helps with everything from lower capital expenditures on hardware to a smaller technology footprint to manage and secure, and even better developer productivity. It also means a significant reduction in egress charges for data transfers; these savings can be sizable and have a meaningful impact on overall finances.

Delivery optimization for large payloads
For the gaming platforms that distribute game downloads to users, delivery optimization is about getting large payloads delivered quickly at low cost. This is different from the performance challenges of many other organizations because the goal isn’t to be as fast as possible. A user doesn’t care that much if their DLC takes 10 minutes instead of 8 minutes. But it’s a disaster if it takes an hour. At the same time, with so much content to deliver, costs have to be kept as low as possible, so the goal is to manage everything toward “good” delivery performance at a “great” cost.

App and API security, including DDoS and bot protection
For both gaming platforms and game studios, it’s not just about the gameplay itself. There are also tons of applications, sites, and APIs for experiences within, connected to, or related to a game that must be kept secure against a variety of security threats. There’s a constant threat of DDoS attacks and malicious actors probing for weaknesses, and because of in-game commerce mechanics, games often represent functioning marketplaces where actors can steal or appropriate items and sell them for a profit. This means a ton of activity like account takeover attempts, fraudulent activity, stolen credit card usage, and every method under the sun to exploit the system or other players for valuable items. Companies that offer a variety of gaming experiences ranging from console to apps to web-based, plus the in-game and related applications, have many types of applications and APIs to secure. Many WAF vendors have limited deployment models, so it becomes a nightmare to manage multiple WAFs that operate differently in different contexts, and impossible to deploy policies at a global level and know they’re integrated into every new code deployment.

Application performance
Application, site, and API performance also needs to be optimized for the best, fastest user experience possible. Especially in gaming contexts, developers are aiming for experiences that feel instant and responsive, and with in-game purchasing as an important revenue driver, they need to optimize performance with the same rigor as ecommerce platforms, which understand that slow load times result in lower conversion rates and significant negative impacts on revenue. A 0.1-second improvement in mobile site speed increases conversion rates by 8.4% for retail sites (Source), and improved speed in delivering a site or in-app purchasing experience has a positive impact at every stage of a mobile funnel, from the product listing page to the completed order (Source).

In-game spending has grown to be essentially its own ecommerce industry inside of the gaming vertical, and even small performance improvements make a big impact on completion rates and revenue. As stated in Variety’s VIP+ 2024 The State of the Video Game industry report: “Alongside revenue from its mega-popular “Grand Theft Auto” online service, Take-Two saw nearly 80% of its third-quarter earnings in calendar 2023 come from microtransactions, a feat achieved through its hefty $12.7 billion purchase of Zynga in 2022. Even Sony derived nearly 25% of PlayStation revenue from “add-on content” in its last reported earnings.” (Source)

As is the case in other industries, responsiveness takes on even more importance in a saturated market where a product is surrounded by other high-quality alternatives. There’s no room for sub-par experiences (or latency, for that matter). To quote Newzoo from their “Games Market Trends 2024” report: “just 19 games eat up approximately 60% of playtime, with the top 33 games accounting for three-quarters of overall playtime.” (Source)

Application development
Application development is its own huge area of challenges, and yet with gaming companies it’s just one of many. It’s difficult to enable app devs to speed up their development cycles without compromising on security or racking up huge charges from inefficient storage or processing practices. Sometimes DevOps doesn’t have the tools to give app devs the freedom to move quickly. Sometimes SecOps doesn’t have the tools to feel confident in the security built into the organization’s app development processes. Disjointed point solutions and legacy tools that don’t integrate with existing CI/CD workflows create big DevSecOps headaches, and those flow downstream as points of friction in application development.

Instant global scaling
The market for games is global, and while this is great when you’re looking to build a community of players, it also brings new challenges for global scalability and reliability. As the architecture at origin grows to handle new players coming online, crowded servers, and matching users and serving their content, it gets more and more complicated and costly to be prepared for spikes and edge cases while ensuring availability.

An edge cloud strategy for gaming

Facing the challenges of four or more distinct industries at once is hard, so it makes sense to look for solutions that can address a wide variety of these challenges and integrate into all of the different workflows across the organization. Taking advantage of the network edge is a smart place to start because it helps with latency, content delivery optimization, improved security, and cost savings. An edge cloud strategy can help address all of these pain points, but it’s important not to repeat, at the edge, the architectural mistakes that created all of your headaches at origin. Point solutions that exist on different edge networks and get cobbled together don’t bring the same benefits as a more unified approach.

Edge cloud platform benefits
You shouldn’t think about addressing CDN services, edge security improvements, and edge computing without unifying them on a single network. Not all edge cloud is created equal. For example, a company might offer edge security, edge computing, and a CDN under the umbrella of a platform, when what’s really under the hood is three completely separate networks duct-taped together under a brand name. Platform benefits are also unavailable when engaging with point solutions, like a WAF vendor who doesn’t offer other edge capabilities. In that case your WAF operates separately from the other activities at the edge that are important to your organization, and optimizations across security and edge computing, or security and CDN services, are much more limited. Some security services, like DDoS protection, actually benefit from protection at different layers of the network, and there are disadvantages if your DDoS protection is split between a CDN operating at the network layer and an unrelated WAF operating at the application layer.

It’s important to select solutions that actually run together on one network in order to get the performance and savings benefits necessary to make an edge strategy successful. As a bonus, you also get other benefits like higher developer productivity. There’s less context switching for DevOps and SecOps because more of their toolkit and reporting is under a single dashboard. There are also vendor consolidation benefits like saving time and getting more predictable budgeting, as well as the potential for meaningful savings. A unified customer support experience also means that when you run into problems across different parts of your edge services, the team you engage with for help has insight into your whole story, rather than a limited view and therefore limited ability to diagnose and correct an issue. 

But the platform efficiencies are just the tip of the iceberg. Here are some of the specific benefits that come with selecting a unified edge cloud platform:

Enhanced delivery and network services capabilities at the edge

Maximize origin offload
Origin offload is one of the most important benefits of doing more at the network edge. Legacy CDNs can save organizations a lot of time and money, but modern edge cloud platforms take it even further by letting organizations move more to the network edge, gaining performance by locating experiences closer to end users while saving money by delivering them more efficiently. Every request handled at the edge is also one less request your servers at origin have to handle, so there are potential savings in capital expenditures by shrinking your origin, as well as in maintenance costs.

Site Reliability Engineers (SREs) are responsible for availability, so when you simplify your infrastructure at origin they can invest more of their time into work that advances the organization’s goals, rather than just putting out fires and chasing issues. Selecting a fully programmable edge cloud solution gives SREs, DevOps, SecOps, and application devs the tools to move as much as possible to the edge and configure the solutions they need. This can be anything from handling image optimization and transformation at the edge, so that you only need one version of any image stored at origin, to decommissioning your whole WebSockets stack because it can be implemented more efficiently at the edge. Solutions that let you cache APIs and their dynamic responses, or store more personalization data at the edge, take you even further. Every solution that reduces the number of calls to origin benefits you in multiple ways.
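The back-of-the-envelope math behind origin offload is simple: every point of cache hit ratio you gain removes requests, and therefore egress, from origin. The numbers below are purely illustrative assumptions:

```python
# Back-of-the-envelope origin offload math. All figures are illustrative
# assumptions (request volume, object size, and a hypothetical egress rate).

def offload_summary(requests, avg_mb, hit_ratio, egress_per_gb):
    """Return (origin_requests, origin_egress_usd) at a given edge hit ratio."""
    misses = requests * (1 - hit_ratio)
    origin_gb = misses * avg_mb / 1000
    return round(misses), round(origin_gb * egress_per_gb, 2)

# 10M requests/day at 2 MB each: raising the hit ratio from 90% to 99%
# cuts origin traffic (and the matching egress bill) by 10x.
at_90 = offload_summary(10_000_000, 2, 0.90, 0.08)   # (1000000, 160.0)
at_99 = offload_summary(10_000_000, 2, 0.99, 0.08)   # (100000, 16.0)
```

The nonlinearity is the point: going from 90% to 99% offload doesn’t shave 9% off the origin bill, it removes 90% of what was left.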

Good enough performance at a great price
For a gaming company, it’s not just about selecting specific operations to move onto an edge cloud platform. It’s also about optimizing the delivery of large files. The gaming industry distributes a lot of content: game downloads, DLC, tuning packages, and so on. We’ve mentioned that in these cases delivery optimization doesn’t necessarily mean as fast as possible; it means reasonably fast, and as cheap as possible. What you definitely don’t want is slow and expensive. Preventing the worst possible outcome isn’t just about optimization, it’s also about observability. Selecting a platform that offers real-time logging, alerts, and observability is critical. Legacy CDNs often restrict you to batch reporting that can take a long time, and may even require professional services charges to obtain. Solutions built for real-time performance in content delivery should also provide real-time visibility into what’s happening. Without it you risk waiting around for a batched data report while users get angry and elevated bandwidth costs burn a hole in your bottom line.

Shielding, request collapsing, and instant scaling for large payloads
The large files distributed by gaming companies are often identical across the user base, so they have a lot of potential to be highly optimized. A fully programmable edge cloud platform means the network can do more to help you scale while preventing requests from ever needing to hit your origin, through the maximum amount of shielding and request collapsing. In an ideal world you would see one call to origin for any piece of content like this, with every subsequent download delivered from the edge. Sometimes that’s impossible, but you want every advantage to get as close to that as possible, because the egress charges and other costs of serving large chunks of content add up quickly, and a lot of it is avoidable… it’s like first-person shooting yourself in the foot.

Legacy CDNs have limited capacity to optimize requests. They can usually offer some amount of shielding where, when a request comes in, some of the CDN network’s Points of Presence (POPs) call to another POP on the network that already has the content rather than calling to origin. However, these solutions usually offer zero or limited configuration, and they only offer limited shielding. The POPs can’t intelligently coordinate with one another, so your origin is always getting hit, and sometimes it’s still getting hit a LOT. 

A modern edge cloud platform will offer more than a CDN – it will allow for advanced settings, configurability, and even programmability if needed. You can do things like designate a single POP on the network as the primary shield so that all other POPs know to go there with requests instead of calling back to origin. You can also perform “request collapsing,” where POPs collect and bundle multiple requests together to further reduce load.
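Request collapsing is easiest to see in miniature: concurrent requests for the same object share a single in-flight origin fetch instead of each triggering their own. The sketch below is a toy model in Python, not any vendor’s actual implementation; the class and function names are invented for illustration:

```python
# Toy request-collapsing cache: concurrent requests for the same key share
# one in-flight origin fetch. Names are illustrative, not a vendor API.
import asyncio

class CollapsingCache:
    def __init__(self, fetch_origin):
        self.fetch_origin = fetch_origin
        self.inflight = {}   # key -> the single Task fetching from origin

    async def get(self, key):
        if key not in self.inflight:
            self.inflight[key] = asyncio.create_task(self.fetch_origin(key))
        return await self.inflight[key]   # everyone awaits the same fetch

async def demo():
    calls = []
    async def origin(key):
        calls.append(key)            # count real origin hits
        await asyncio.sleep(0.01)    # simulated origin latency
        return f"payload:{key}"

    cache = CollapsingCache(origin)
    results = await asyncio.gather(*[cache.get("dlc-1") for _ in range(50)])
    return len(calls), set(results)

# 50 concurrent downloads of "dlc-1" collapse into a single origin fetch.
```

A real edge platform does this per POP (and across POPs via shielding), but the principle is the same: the origin sees one request, not fifty.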

Here is a comparison of the capabilities of a modern edge cloud platform with the limitations you’ll find in a legacy CDN when handling more requests at the edge and maximizing the shielding of your infrastructure at origin.

Fewer, more powerful POPs
- Modern edge cloud: A faster edge network running fewer, higher-capacity POPs can reduce the number of requests needed to distribute a piece of content. “Request collapsing” can reduce it even further, but even without collapsing turned on, fewer POPs can mean fewer requests.
- Legacy CDN: Networks with huge numbers of smaller POPs necessitate more requests, and also cycle content out of cache faster due to shorter lifetimes. More calls to origin are needed to distribute a piece of content, especially if efficient request collapsing isn’t possible.

Single POP offload designation
- Modern edge cloud: Allows designation of a single POP (instead of your origin) as the source for the rest of the network to pull from. You can save big on egress by further reducing requests to origin, and scale down the size and cost of your origin when peak traffic is handled at the edge.
- Legacy CDN: Cannot designate a single shielding POP, so many POPs will always have to pull from your origin to serve the rest of the network, forcing you to absorb higher egress charges and higher traffic peaks.

Maximum control and configurability
- Modern edge cloud: Fully configurable shielding solutions rather than just an on/off switch for a feature that does a limited amount of optimization.
- Legacy CDN: Has an on/off switch for shielding without configurability. If you can’t configure the network to match your needs, then you can’t truly optimize.

Geographic configurability
- Modern edge cloud: Supports geographic optimizations so that the shielding POP for a given piece of content can be the one closest to where most of its audience is located.
- Legacy CDN: Cannot support geographic configurations, so it’s a crapshoot which portion of your global traffic gets lower-latency experiences while aggregate latency suffers.

Real-time logging, alerts, and observability
- Modern edge cloud: Provides real-time logging, alerts, and visibility so that you can confirm everything is behaving normally, and identify and fix problems as soon as they present themselves.
- Legacy CDN: Cannot provide real-time logging, alerts, or observability, so you can’t see performance information or diagnose problems as they occur. This can lead to inefficient and expensive operations that persist until you can gather reports, diagnose, and test solutions.

Fewer headaches and less management at origin
- Modern edge cloud: Handles optimization for you on the platform so that you don’t have to change much at origin to get maximum offload and efficiency.
- Legacy CDN: Cannot optimize the network for you, leaving more of the engineering, management, and tedious maintenance work at origin.
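The geographic-configurability point above amounts to a simple policy: shield at the POP closest to the bulk of a title’s audience. A minimal sketch of that selection, with made-up POP names and request counts:

```python
# Choosing a shield POP near the bulk of a title's audience.
# POP names and traffic counts are invented for illustration.

def pick_shield_pop(requests_by_pop):
    """Shield at the POP that sees the most traffic for this content."""
    return max(requests_by_pop, key=requests_by_pop.get)

audience = {"tokyo": 48_000, "frankfurt": 21_000, "ashburn": 9_500}
pick_shield_pop(audience)   # "tokyo" then serves the rest of the network
```

A production platform would weigh more signals than raw request counts (origin proximity, capacity, cost), but the core idea is the same: the shield should sit where the audience is, not wherever the vendor happened to put it.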

Enhanced Security at the edge

Edge cloud platform benefits for security
As edge networks have matured, it has become a smart strategy to move security to the edge for many industries, not just gaming. But not all edge networks are the same, and not all edge security solutions bring the same benefits. It’s important not to recreate at the edge the same pitfalls of complexity and tech debt that restrict you at origin. You can gain efficiencies by distributing security request handling across a lightning-fast edge network instead of routing it through an on-prem bottleneck, but if that solution isn’t well integrated with the rest of your edge activities you may lose all of those benefits to the complexity of operating across multiple networks, vendors, and dashboards.

Security is about more than Web application firewalls (WAFs), but WAFs are a good microcosm to demonstrate this point. There are significant benefits to selecting a single WAF solution that can work across every piece of an organization’s tech footprint, from legacy servers tied to an ancient (but critical) service to cutting-edge applications with modern CI/CD practices. Managing a single WAF from a single vendor with unified customer support across the organization has simplification benefits all on its own, but you can also improve productivity by fostering a stronger DevSecOps relationship that speeds up application developers while trusting that they’re operating safely.

This same pattern repeats itself if you zoom out from WAFs to your entire security solution. Do you have multiple bot solutions as well? Multiple DDoS solutions? It could all repeat again on the edge if you approach your edge cloud strategy without a platform strategy. If you have different point solutions at the edge for your different security concerns (WAF, bots, API protection, DDoS, etc.), then you can’t take advantage of some of the best edge benefits.

It gets even worse when you start to think more holistically about your security. Beyond WAFs and other security solutions, a strong security posture means looking at things like better integration into your CI/CD workflows so that proper security configurations are guaranteed, shrinking your organization’s attack surface by reducing the size and complexity of your infrastructure, and moving more of your workloads into environments that are more secure by default. A multi-layer approach to security requires simplification and coordination, and consolidating onto a single platform that can manage most of this on one network, under one contract, and within one dashboard can deliver outsized results.

App and API security, with bot and DDoS protection at the edge
For application layer security, if you aren’t using a WAF that you feel comfortable taking out of logging mode and into blocking mode, get a different WAF. If you’re using a WAF that doesn't integrate with your existing tools and modern CI/CD workflows, it’s time to drop it and select one that does. If your WAF doesn’t let you test new rules safely against real traffic in a simulated environment, then it’s time to pick one that does. And if you’re using a dozen WAFs across the organization, it’s probably time to drop all of them and pick one that can deploy everywhere you need it to, consolidate vendors for a cost-savings win, and let your entire organization work under a single dashboard. Beyond all of that, you also need a WAF and broader security strategy that takes advantage of the network edge. 

Great security at the edge means selecting a WAF that leverages the network edge, but it goes well beyond that. An edge WAF solution helps in several ways. A globally distributed edge cloud can absorb huge DDoS attacks from massive botnets because the traffic is spread across the network edge, and this protection can be always-on and autoscaling to ensure your servers at origin are never hit with traffic spikes or malicious requests. Every time an organization offloads the handling of a security event or malicious requests to an edge cloud provider, there are two wins. First, the attack is blocked and handled before it ever reaches your infrastructure, which means your infrastructure doesn’t have to scale just to meet demand caused by a spike in malicious requests. Second, because the malicious requests never even reach your origin, you’re more secure – you should block things as far away from what you care about as possible.

In addition to blocking things far away, you should block them early. A good edge cloud solution also provides a way to block malicious actors preemptively, before they start attacking you, by leveraging aggregate data about malicious activity to give everyone behind that solution real-time intelligence and blocking capabilities. If an IP address is blocked while attempting to send malicious attacks to one customer behind the network, then all others who opt in can automatically start blocking that IP address as well. In this way you can start blocking malicious actors before they even attack you, because you know they’re attacking somebody. This kind of real-time intelligence benefits from the same low latency in the edge network that ensures your content delivery happens instantly.
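The shared-intelligence idea can be sketched in a few lines: an attack observed against one customer updates a network-wide feed, and every opted-in customer starts blocking that source immediately. The classes and names below are purely illustrative, not a real provider’s API:

```python
# Sketch of network-wide threat intelligence: a block observed for one
# customer propagates to every opted-in customer. Purely illustrative.

class ThreatFeed:
    """Aggregate view of malicious sources seen anywhere on the network."""
    def __init__(self):
        self.blocked_ips = set()

    def report_attack(self, ip):
        self.blocked_ips.add(ip)

class EdgeService:
    """One customer's edge config; opt_in controls use of the shared feed."""
    def __init__(self, feed, opt_in=True):
        self.feed, self.opt_in = feed, opt_in
        self.local_blocks = set()

    def allow(self, ip):
        if ip in self.local_blocks:
            return False
        if self.opt_in and ip in self.feed.blocked_ips:
            return False   # blocked preemptively via shared intelligence
        return True

feed = ThreatFeed()
game_a = EdgeService(feed)               # opted in to shared intelligence
game_b = EdgeService(feed, opt_in=False)

feed.report_attack("203.0.113.9")        # attacker seen hitting someone else
# game_a now refuses 203.0.113.9 before ever being attacked itself;
# game_b, having opted out, still lets it through.
```

The value comes from the fan-out: one observed attack protects every participant, which is only practical when everyone sits behind the same low-latency network.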

Multi-layer security
It’s clear that an edge-powered approach can supercharge your security at the application layer, but the real benefits come from the ways an edge cloud platform can provide security improvements across different layers and areas of operation. Here are five examples, just for starters, of how a platform and multi-layer approach can be greater than the sum of its parts.

1. Reduced attack surface area → Advanced caching and edge computing capabilities can help an organization move more content and workloads to the edge. This effectively shrinks their core infrastructure and attack surface. If you have less hardware to maintain, there are fewer places where things can go wrong. And if you can move entire stacks to the edge, like decommissioning WebSockets in your own infrastructure and handling it at the edge while your origin speaks plain HTTP, then that’s an entire technology stack that your SREs, DevOps, and SecOps teams can stop worrying about. (Not to mention the potential cost savings.)

2. Rate limiting at the edge → A powerful edge platform lets you enable smart threshold blocking and rate limiting based on any number of signals, leveraging network intelligence to identify bad actors in real time. Humans usually don’t make dozens of requests within a second or two, so thresholding and rate-limiting features applied at the network edge can be very effective at blocking malicious attacks without triggering false positives that block actual users performing legitimate actions. The capacity and distributed nature of an edge cloud mean these services can be applied without adding latency to the user experience or affecting performance.
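The "humans don’t make dozens of requests per second" heuristic maps directly onto a sliding-window rate limiter. Here is a minimal sketch; the threshold, window, and client IDs are hypothetical, and a real edge platform would enforce this per POP with shared counters:

```python
# Sliding-window rate limiter sketch. Thresholds are hypothetical; a real
# edge platform would key on many more signals than a client ID.
from collections import deque

class RateLimiter:
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = {}   # client_id -> deque of recent request timestamps

    def allow(self, client_id, now):
        q = self.hits.setdefault(client_id, deque())
        while q and now - q[0] >= self.window:
            q.popleft()                      # drop requests outside the window
        if len(q) >= self.max_requests:
            return False                     # over threshold: block
        q.append(now)
        return True

limiter = RateLimiter(max_requests=5, window_seconds=1.0)
burst = [limiter.allow("bot-1", t * 0.01) for t in range(20)]  # 20 reqs in 0.2s
# Only the first 5 get through; a human-paced client never hits the limit.
```

Because the window slides, a legitimate player making a request every couple of seconds is never affected, while a bot hammering the same endpoint is cut off within the first fraction of a second.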

3. CI/CD integration → Modern solutions that integrate better with your CI/CD workflows can allow for security policy deployment at a global level with confidence that those policies will be integrated into every update the application developers push. We could be talking about WAF policies or content configurations just for starters, but the integrations help application developers move quickly while adhering to best practices from their DevOps and SecOps counterparts. DevOps and SecOps can give more freedom to the application developers and put them through fewer hurdles because they trust the centrally administered integrations to keep things safe and configured correctly. 

4. Real-time logging, alerts, and visibility → Real-time access to data and insights is a secret power of a modern edge cloud platform that isn’t always considered. When all of your edge cloud solutions are operated on a single network that is built for real-time performance, and it’s run within a single dashboard, you also get real-time views of what’s happening with your traffic and security. It should also provide real-time streaming into your existing tools. Legacy solutions may require a professional services engagement just to deliver batch data, which can cost significant amounts of time and money if you’re trying to diagnose a problem, or just looking for ways to optimize. Having one place to access insights across your content delivery, advanced network services, application layer and network layer security, and edge computing workloads can feel like a superpower after being stuck with solutions that act more like a black box. 

5. Consolidated customer support → When your customer support counterpart can access information and expert assistance across all of your edge cloud solutions, problems get solved faster, more easily, and more effectively. You also get a counterpart who can look across all of your activity to find the cause of an unknown issue rather than passing the buck at the border of a point solution. For example, when security solutions operate across both the network layer and the application layer, it helps to have a single point of contact who can dig into your CDN configurations as well as your WAF, DDoS, and bot protection settings.

Enhanced edge computing

Improved application performance
Applications executed at the edge, closer to gamers and their devices, see an immediate and noticeable improvement in performance. Reduced latency and readily available edge servers allow for localized data processing, real-time analysis, and decision-making without relying on data residing in a data center that may be far from where the gamer is located. This makes edge computing well suited to online (or on-device-but-connected) gaming, and it is particularly advantageous for latency-sensitive applications such as augmented reality, live video streaming, and games that incorporate live video feeds into gameplay, where milliseconds matter for fluid game operations. By leveraging an edge computing infrastructure, online games can achieve new levels of performance and responsiveness, unlocking differentiating possibilities for innovation and efficiency.

WebSockets are essential to improving application performance because they provide real-time communication between clients and servers. Their persistent, bidirectional connections, when combined with edge computing, allow for low-latency data exchange and seamless interactions between players. Game developers can use them to implement real-time features such as chat and live scores, and to synchronize the state of in-game characters. WebSockets also reduce network overhead and enable more responsive gameplay, even in bandwidth-constrained environments. Because they quickly establish and maintain persistent connections, they eliminate the need for frequent polling, resulting in more efficient use of network resources and improved scalability. Overall, integrating WebSockets into game development lets developers create immersive, interactive multiplayer experiences that are responsive, engaging, and highly dynamic.

When combined with edge computing, WebSockets offer a unique opportunity to reduce the load on the origin server, enabling it to focus on other critical tasks. Furthermore, unlike the origin, edge locations can scale dynamically to accommodate fluctuating demand, ensuring consistent performance even during peak usage periods. Transitioning WebSockets to the edge optimizes performance, enhances scalability, and delivers a more seamless real-time communication experience for players.
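A rough back-of-envelope calculation shows why eliminating polling matters. The byte counts below are illustrative assumptions (typical HTTP header sizes and small WebSocket frame headers), not measurements of any particular platform:

```python
# Illustrative overhead comparison: HTTP polling vs. a persistent WebSocket.
HTTP_OVERHEAD_BYTES = 700      # assumed headers per polling request/response pair
WS_HANDSHAKE_BYTES = 900       # assumed one-time upgrade handshake
WS_FRAME_OVERHEAD_BYTES = 6    # small masked client frame header (2 bytes + 4-byte mask)

def polling_overhead(updates, empty_polls):
    """Every poll costs full headers, whether or not an update arrived."""
    return (updates + empty_polls) * HTTP_OVERHEAD_BYTES

def websocket_overhead(updates):
    """One handshake, then a few bytes of framing per update."""
    return WS_HANDSHAKE_BYTES + updates * WS_FRAME_OVERHEAD_BYTES

# A session with 200 game-state updates; polling once a second for
# 10 minutes also produces 400 empty polls.
polling = polling_overhead(updates=200, empty_polls=400)    # 420,000 bytes
websocket = websocket_overhead(updates=200)                 # 2,100 bytes
```

Even with generous assumptions in polling's favor, the persistent connection carries the same updates for a small fraction of the protocol overhead, which is where the "more efficient use of network resources" claim comes from.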

Game developers have continually pushed the boundaries of in-game connectivity to deliver seamless multiplayer experiences across diverse genres and platforms. At least one analyst expects this trend to continue for the foreseeable future: “We will see the launch of titles with rich social worlds and deep gameplay experiences that aim to connect communities together. Gaming will continue to grow into a social activity that connects communities from around the world together.” (Source)

Faster development cycles
Complex games require continuous development and updates, and gaming companies constantly search for ways to shorten development cycles. Project and resource management only go so far, but recent advancements in edge technologies have significantly sped up development cycles, performance, and productivity. Continuous Integration/Continuous Deployment (CI/CD) is one area where an edge deployment can enable faster deployment, better scalability, and more.

CI/CD practices benefit a wide range of development teams, but teams with frequent code changes are likely to benefit the most, especially from automated integration and deployment processes that reduce the risk of errors and ensure changes are quickly and reliably pushed to production. Teams building smaller applications, such as web or mobile apps, also benefit greatly from the agility CI/CD brings, allowing them to iterate quickly, gather user feedback, and deploy updates rapidly, which helps the company stay competitive. Overall, development teams that prioritize speed, reliability, and agility in their software delivery processes stand to gain the most from adopting CI/CD practices.

Incremental releases allow you to continuously develop and implement new use cases that may influence future software updates. You can also discover issues along the way rather than waiting for a complete release, at which point multiple bugs may be intertwined and thus harder to detect and address. This also allows you to respond quickly to in-game bugs and new requirements. There is an important security component here as well: DevOps teams can rapidly release bug fixes in reaction to newly disclosed security vulnerabilities and provide immediate protection for applications and web properties.

Servers that scale instantly and to any size
As already mentioned, data processing occurs “locally” when leveraging edge computing infrastructure, significantly reducing the time requests and data spend traveling to the origin and back. The lower latency results in faster load times, smoother interactions, and with that, a more seamless user experience. Moreover, this architecture lets applications capitalize on proximity to data that also resides at the edge, enabling essentially real-time processing of information.

Additionally, executing applications at the edge facilitates better scalability: computing resources can be distributed across a network of edge devices, allowing horizontal scaling to meet fluctuating demand. In short, an infrastructure built around edge computing offers a way to handle increased workload more efficiently. The distributed approach can also minimize bandwidth usage by processing data locally and transmitting only essential information to centralized (game) servers, resulting in egress cost savings and improved network efficiency. Overall, executing applications at the edge not only enhances performance but also gives developers greater flexibility, scalability, and efficiency in managing and deploying their applications.
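The idea of "processing data locally and transmitting only essential information" can be sketched in a few lines. The telemetry fields and sample values here are made up for illustration:

```python
# Illustrative sketch: aggregate raw telemetry at an edge node and forward
# only a compact summary to the central game servers.

def summarize_at_edge(latency_samples_ms):
    """Reduce a batch of raw samples to the fields the origin actually needs."""
    return {
        "count": len(latency_samples_ms),
        "min_ms": min(latency_samples_ms),
        "max_ms": max(latency_samples_ms),
        "avg_ms": sum(latency_samples_ms) / len(latency_samples_ms),
    }

# Raw per-player latency samples collected at the edge:
samples = [18.0, 22.0, 35.0, 19.0, 26.0]
summary = summarize_at_edge(samples)
# The origin receives one small summary object instead of every raw sample,
# which is where the egress savings come from.
```

At real scale the batch might be thousands of samples per node, so the ratio of raw data to forwarded summary (and therefore the egress saving) grows with traffic.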

Conclusion and additional resources

This report addresses big challenges, and every organization is going to encounter them a bit differently, but the fundamental benefits from leveraging the edge cloud are flexible enough to deliver value in just about any situation. If you want help navigating how that might look for you, get in touch with us today, or check out some of the related resources below.

The DevOps Roadmap for Security

DevOps is a movement that enables collaboration throughout the entire software delivery lifecycle by uniting two teams: development and operations. The benefits of DevOps can extend to security by embracing modern secure DevOps practices.

The Modern Application Development Playbook

The three biggest challenges in modern app development are scaling, performance, and optimization. Learn how serverless edge can make your organization faster and safer while removing DevOps headaches and saving you money.

AppSec guide to multi-layer security

The guide covers 8 tactics for a unified approach to AppSec that builds more security into your CI/CD workflows, reduces maintenance, shrinks your attack surface, and saves money.

Guide to the Modern CDN

Traditional CDNs may be stifling your online experience. Download the Guide to the Modern CDN ebook to understand the importance of control and how a modern CDN puts you back in charge.

Meet a more powerful global network.

Our network is all about greater efficiency. With our strategically placed points of presence (POPs), you can scale on-demand and deliver seamlessly during major events and traffic spikes. Get the peace of mind that comes with truly reliable performance — wherever users may be browsing, watching, shopping, or doing business.

313 Tbps: Edge network capacity (as of September 30, 2023)

150 ms: Mean purge time (as of December 31, 2019)

>1.8 trillion: Daily requests served (as of July 31, 2023)

~90% of customers: Run Next-Gen WAF in blocking mode (as of March 31, 2021)

Support plans

Fastly offers several support plans to meet your needs: standard, gold, and enterprise.

Standard: Free of charge and available as soon as you sign up with Fastly.

Gold: Proactive alerts for high-impact events, expedited 24/7 incident response times, and a 100% uptime Service Level Agreement (SLA) guarantee.

Enterprise: Gives you the added benefits of emergency escalation for support cases and 24/7 responses for inquiries (not just incidents).

Ready to get started?

Get in touch.