Building a defense-in-depth security strategy for web applications


Cyber attacks are on the rise, and governments are urging both citizens and organizations to review and enhance their security posture. Naturally, this leads many customers to assess what measures they can put in place today to ensure the best web application protection. While there’s no magic answer to stop all attacks, there are a number of best practices used in a defense-in-depth strategy that can limit their impact.

In this blog post, I’ll outline steps you can take to protect your applications on many levels, but the first step is understanding how to limit what’s actually available to be attacked in the first place. 

Inventory your attack surfaces and what’s in place from vendors

It’s useful to start by thinking about the multiple steps that happen when a client sends and receives information with a web application. For example, most people consider the attack surface of their home to be the front door, but they should also consider the windows, the driveway, the water line, and the telephone pole on the street. Likewise, any component that could prevent an application from serving its intended users can be a target.

Looking at those components of a web request sequentially: first, the DNS phase resolves the domain name to an IP address. The client then connects to that IP and sends an HTTP request. Commonly, this IP is bound to a reverse proxy such as a CDN or load balancer. Finally, the reverse proxy must make a request to an origin network that houses the original version of the content. Each of these steps is something we need to protect.
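These phases can be sketched in a few lines of Python. This is a simplified illustration, not how any particular CDN implements them; `resolve` covers only the DNS phase, and the later phases are noted in comments:

```python
import socket

def resolve(host):
    """Phase 1: DNS resolves the domain name to an IP address.
    Disrupting this phase denies service before a connection even starts."""
    return socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)[0][4][0]

# Phase 2 would be the TCP/TLS connection and HTTP request to that IP,
# which commonly belongs to a reverse proxy (CDN or load balancer).
# Phase 3 is the proxy's own request back to the origin network.
print(resolve("localhost"))
```

An attacker only needs one of these phases to fail; a defender needs all of them to hold.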

Through all three of these phases, the full OSI (Open Systems Interconnection) or TCP/IP model of connection must occur in order to establish a data link, a connection, and a session, and then return a fully formed response. Simplified, you can think of it like this:

[Image: web app connection phases]

From a security perspective, all of these points are critical functions that can be attacked. Having protections at each place and being aware of what protections your vendor has in place on your behalf is critical to staying available. 

For example, by using distributed DNS, a globally distributed CDN to cache content, rate limiting to enforce speed limits with varying identifiers to make circumvention harder, and a robust WAF in blocking mode, applications can be protected in the face of attack. We like to think of these protections much like a funnel, with layers of protection aligning to a defense-in-depth infrastructure security strategy like below. In the rest of this post, I’ll walk you through each segment and protection in detail.

[Image: DDoS protection layers]

Protect at every layer 

Infrastructure-layer protections

For the vast majority of modern internet applications today, companies have chosen to outsource all three of the web request phases we discussed above to cloud providers, rather than operate a network directly. However, as DNS is a common area of attack and the first phase of a web request to occur, it’s a good idea to consider using multiple DNS providers where available to ensure maximum availability. We also encourage our customers to put a reverse proxy, such as a CDN, in front of as much traffic as possible and lock down the origin to only receive traffic from select IPs — such as those proxies — to avoid attacks directly to the origin network. Lastly, the origin itself could be in a cloud provider or multiple providers, load balanced by the CDN in front of it. This puts the primary responsibility to protect the origin network on the vendor, rather than your own team.
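As a sketch of the origin-lockdown idea, the check below accepts a connection only when the client IP falls inside a proxy allowlist. The ranges shown are placeholders from the reserved documentation blocks, not any vendor's real egress IPs, and in practice this rule usually lives in a firewall or cloud security group rather than in application code:

```python
import ipaddress

# Hypothetical list of your CDN provider's egress ranges; real ranges are
# published by the vendor and should be refreshed automatically.
ALLOWED_PROXY_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # placeholder documentation range
    ipaddress.ip_network("198.51.100.0/24"),  # placeholder documentation range
]

def is_trusted_proxy(client_ip):
    """Origin-side check: only accept traffic arriving from the reverse proxy."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_PROXY_RANGES)
```

Any request that bypasses the CDN and hits the origin directly fails this check, which is exactly the traffic you want to refuse.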

Once these measures are in place, an individual website or application operator can cross off the lower-layer types of attacks, such as UDP amplification or TCP SYN floods, from the list of things they need to directly mitigate, thanks to vendor coverage. For example, we have a large global network with large amounts of bandwidth, mitigation capabilities, and a network group constantly monitoring our infrastructure to immediately react to attacks such as these on our customers' behalf.

Customers who operate their own networks often have scrubbing capabilities from their ISP or other providers, who can divert an IP via BGP, scrub the malicious traffic, and deliver the clean traffic back to the origin network. This service can be used to enforce the above access control lists even in the face of overwhelming traffic.

Application-layer protections

With the core infrastructure now protected, operators can begin focusing on “in-protocol,” or OSI Layer 7, attacks: attacks that simulate a real user request to the application.

Application attacks can take two different forms. The first is a sort of continuation of the infrastructure attack, where there is a flood of traffic with requests that are fairly similar to typical customer requests — the volume is the issue. The attack’s goal could be to overwhelm the application to the point that it stops processing legitimate requests (i.e. denial of service), or it could be to extract data, money, or inventory.

The second form of attack is a specific targeted request that is itself malicious because the ultimate goal is to take control of an application’s systems. SQL injection (SQLi), Command Execution (CMDEXE), Cross Site Scripting (XSS), attempts to access admin pages, and requests seeking to exploit the Log4j vulnerability are all prime examples of this sort of attack.

Mind the gaps 

With the proliferation of internet-connected devices and large-scale botnets, it has become easier than ever to flood a system with in-protocol attacks. Those attacks have also matured over time. Here’s how I see that evolution and the ways to protect your applications:

Serve from cache

At its simplest, someone can request the same resource billions of times in an attempt to exhaust the systems serving it. Simply ensuring that as many assets as possible are served from cache can lessen the impact of this form of attack. Passive benefits, like request collapsing and shielding, that enable high cache hit rates and decrease the volume of traffic sent to origin can offer a particular advantage if the requests originate from bots instead of real users. Additionally, actively working to increase cache coverage, such as by ordering and sanitizing request query parameters, keeps as much in cache as possible. Serving traffic from cache is often the least computationally challenging, and most cost-effective, way of resolving an in-protocol application attack.
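Ordering and sanitizing query parameters might look like the following sketch, where `KNOWN_PARAMS` is a hypothetical allowlist of parameters the application actually uses. Two requests that differ only in parameter order or tracking tags then map to the same cache key:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Hypothetical allowlist: anything else (tracking tags, cache-busting
# noise) is dropped before the URL is used as a cache key.
KNOWN_PARAMS = {"page", "sort", "q"}

def normalize_for_cache(url):
    parts = urlsplit(url)
    params = [(k, v) for k, v in parse_qsl(parts.query) if k in KNOWN_PARAMS]
    params.sort()  # ?a=1&b=2 and ?b=2&a=1 now share one cache entry
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(params), ""))
```

In production this normalization typically runs at the edge (for example, in CDN configuration) rather than in origin code, but the effect is the same: fewer distinct cache keys, more hits.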

High rates, high risk

The next evolution of attacks is to vary the URL with a random hash, or to change other values of the request so that it cannot be cached. This means a large volume of traffic is sent back to the origin at a high rate. Collectively, we call these “cache-busting HTTP floods.”

Edge rate limiting looks at the overall rate of traffic based on a client identifier. While an attacker can vary IP, user agent, network, etc., multiple policies could be used together to set overall traffic speed limits based on what the typical traffic profile for a site should look like. Such policy guardrails could include allowing up to 100 RPS per IP, 100 RPS per userID, 500 RPS per network (e.g. ASN) etc. The goal is not to eliminate the attack entirely, but to keep the flow of traffic low enough that downstream systems are able to process the volume. 
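A minimal sketch of those layered speed limits, assuming per-second fixed windows and made-up limits: a request must pass every policy at once, so rotating one identifier still trips the others.

```python
import time
from collections import defaultdict

class EdgeRateLimiter:
    """Fixed-window request counters over several client identifiers at once."""

    def __init__(self, limits):
        self.limits = limits          # e.g. {"ip": 100, "user": 100, "asn": 500}
        self.counts = defaultdict(int)

    def allow(self, identifiers, now=None):
        window = int(now if now is not None else time.time())  # 1-second windows
        keys = {dim: (dim, identifiers[dim], window) for dim in self.limits}
        # Rotating one identifier (say, the IP) does not help the attacker:
        # the per-user or per-ASN counter still accumulates.
        if any(self.counts[key] >= self.limits[dim] for dim, key in keys.items()):
            return False
        for key in keys.values():
            self.counts[key] += 1
        return True
```

Real edge rate limiting is distributed across many points of presence, which this single-process sketch glosses over, but the layering of policies is the core idea.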

The low and slow heat

Not every attacker goes for the brute-force approach. Some hide their traffic at lower volumes to stand out less, preferring a low-and-slow attack that relies on the pain being felt in backend systems. If an attacker finds an API able to bulk-change records, or an authentication request that queries multiple systems, they can rely on the application to amplify their use of force for them. Equally, attackers probing for vulnerable systems want to keep the volume low enough to evade detection altogether.

For more exact control, application rate limiting provides surgical precision against low-volume attacks. For example, a rule can easily be built to limit IPs to five requests over one minute to a list of API URLs. The logic can also be more involved, such as rate limiting only certain sources, like foreign countries or Tor nodes, or excluding lists of known-good request sources. It is usually not good practice to block all requests from a particular country unless specifically required by law. Rather, it’s often better to be stricter about the volume and types of requests accepted from higher-risk geographies, such as those your organization does not operate in today.
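A sketch of such a rule, with hypothetical paths, a made-up trusted source, and the five-requests-per-minute policy from the example: a sliding window per IP applies only to the protected API URLs.

```python
from collections import defaultdict, deque

# Hypothetical policy: 5 requests per 60 s per IP on sensitive API paths,
# skipping known-good sources (e.g. an internal health checker).
PROTECTED_PREFIXES = ("/api/bulk", "/api/auth")
TRUSTED_IPS = {"192.0.2.10"}  # placeholder address
LIMIT, WINDOW = 5, 60.0

history = defaultdict(deque)  # ip -> timestamps of recent protected requests

def allow(ip, path, now):
    if ip in TRUSTED_IPS or not path.startswith(PROTECTED_PREFIXES):
        return True
    recent = history[ip]
    while recent and now - recent[0] >= WINDOW:
        recent.popleft()              # drop timestamps outside the window
    if len(recent) >= LIMIT:
        return False                  # sixth request inside 60 s is denied
    recent.append(now)
    return True
```

Because the limit is scoped to specific URLs, normal browsing is untouched while the expensive endpoints that amplify an attacker's effort stay throttled.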

Sweat the small stuff

With the various types of flood traffic removed, the malicious requests left are ones where even an extremely small volume is bad. Many of these types of attacks are defined in the OWASP Foundation’s list of top attacks, which can serve as a shorthand for malicious web requests. Additionally, there are individual attacks exploiting common vulnerabilities and exposures (CVEs) to take advantage of flaws in applications. In addition to patching and managing critical vulnerabilities, stopping incoming traffic from reaching the vulnerability itself has historically been the job of a web application firewall (WAF).

With a legacy WAF, it’s advisable to ensure the WAF rules are in blocking mode and cover as many assets as possible to maximize protection. Next-gen WAF solutions, however, can offer enhanced language processing to better identify malicious requests and prevent false positives. Additionally, our Next-Gen WAF leverages thresholds that consider both the attack classification and the velocity of the attack before denying requests. This decreases the chance of blocking a “subjectively benign” request, such as an employee submitting code into a CMS that happens to trigger an attack signal. In combination, this allows the WAF to be applied to more assets while providing accurate blocking in production. As mentioned above, rate limiting capabilities enforce traffic boundaries beyond the typical role of a traditional WAF.
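The thresholding idea can be illustrated with a sketch (my own simplification, not the product's actual logic): a request classified as an attack is only logged until the same source crosses a signal-rate threshold, after which it is blocked.

```python
from collections import defaultdict, deque

# Made-up threshold: block a source once it produces 10 attack signals
# within 60 seconds; a lone flagged request is logged, not blocked.
THRESHOLD, INTERVAL = 10, 60.0

signals = defaultdict(deque)  # source ip -> timestamps of attack signals

def decide(ip, looks_like_attack, now):
    if not looks_like_attack:
        return "allow"
    recent = signals[ip]
    while recent and now - recent[0] >= INTERVAL:
        recent.popleft()
    recent.append(now)
    # One flagged request (e.g. an employee pasting code into a CMS)
    # is tolerated; a burst of flagged requests from one source is not.
    return "block" if len(recent) >= THRESHOLD else "log"
```

Combining the classification (is this request attack-shaped?) with the velocity (how often is this source sending attack-shaped requests?) is what lets blocking mode run safely on production traffic.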

Your overall security strategy

It’s worth acknowledging that all the topics I covered in this post, such as leveraging WAFs and CDNs, should exist within a holistic defense-in-depth security strategy. Ensuring role-based access control (RBAC) and multi-factor authentication (MFA) through a central authentication system helps secure users and mitigate unauthorized access to security systems. A WAF is also not a substitute for following best practices in vulnerability management and actively scanning changes made to an application.

Still, the vast majority of traffic traversing the public internet is HTTP(S) based, and the methods outlined in this post reflect some of the most common cybersecurity challenges to web applications today. Engage with your trusted partners to ensure systems are in place to protect your application before an attack strikes. Our technology and support teams are here to help you implement any of these technologies and stay as protected as possible. Reach out to get started or ask questions.

Matt Torrisi
Senior Sales Engineer

Matt Torrisi is a Senior Sales Engineer at Fastly working on the Account Management team to help bring performance and security to some of the largest brands on the internet. His 10 plus-year career has brought him around the world, preaching defense-in-depth security and resilient architecture for internet infrastructure. When not working, Matt can be found in the kitchen of his farmhouse, cooking dinner in quantities disproportionate to the guest list, likely supervised by his three cats.