The truth is, most web app and API security tools were designed for a very different era. A time before developers and security practitioners worked together to ship secure software using integrated workflows. A time before applications were globally distributed and API-based. A time before engineers expected to be able to enter a command and instantly make a global update. But as our CEO Joshua Bixby likes to say, “Attackers are developers, too.” And attackers aren’t bogged down by the limitations of legacy solutions. They’re as nimble as ever, using modern tools and workflows to build and advance new threats. It’s never been more clear that it’s time for a change. So today, we’re outlining new rules for web application and API security that respect the way modern applications are built.
Rule 1: Tools must fight intent, not specific threats
Security teams have long been focused on fighting specific threats. When evaluating new tools, they ask, “Can this protect me against X?” Those threats are often big, news-making events like Stuxnet or the more recent SolarWinds hack. This style of evaluation leads practitioners to tools that look for signatures, or “Indicators of Compromise” (IoCs), of a particular threat. IoCs include things like the IP addresses of known attackers and regular expressions that match the request URLs a particular piece of malware targets.
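To make the limitation concrete, here is a minimal sketch of the kind of IoC matching described above. The blocklisted IPs and URL patterns are invented placeholders, not real indicators:

```python
import re

# Hypothetical IoC lists -- illustrative placeholders, not real indicators.
BLOCKED_IPS = {"203.0.113.7", "198.51.100.42"}  # IPs of "known attackers"
URL_SIGNATURES = [
    re.compile(r"/wp-admin/.*\.php\?cmd="),  # URL pattern a piece of malware targets
    re.compile(r"/etc/passwd"),              # classic path-traversal payload
]

def matches_ioc(client_ip: str, request_url: str) -> bool:
    """Return True if the request matches a known indicator of compromise."""
    if client_ip in BLOCKED_IPS:
        return True
    return any(sig.search(request_url) for sig in URL_SIGNATURES)

print(matches_ioc("203.0.113.7", "/index.html"))       # True: blocklisted IP
print(matches_ioc("192.0.2.1", "/login"))              # False: nothing matches
print(matches_ioc("192.0.2.1", "/blog/etc/passwd"))    # True: URL signature hit
```

Note what this sketch cannot do: a brand-new variant with a fresh IP and a slightly different URL sails straight through, which is exactly the weakness the next paragraph describes.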
What signature-based tools don’t do well is differentiate between legitimate and malicious traffic or keep up with the unyielding increase in threats. And how could they? Recent reports indicate that over 350,000 new malware variants are created every day.
We know this model isn’t working. We’ve seen it fail with anti-virus vendors that couldn’t protect against compromises, legacy WAF providers that look only for SQL injections or cross-site scripting, and bot mitigation tools that look only at the user agent of requesting browsers.
The new rules of web application and API security demand a shift toward a more intelligent model, one that instills enough confidence in the security toolchain that a practitioner can run the system in front of valuable traffic without fear that it will block legitimate requests or let malicious ones through.
Getting there requires making new demands from security technology. First, practitioners must demand tools that examine not just the signature of the traffic but its intent or behavior. This means taking into account factors like the speed of the request, time of day, and user log-in status.
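The behavioral factors above could feed a simple suspicion score. This is a minimal sketch only; the weights and thresholds are invented for illustration, and a real intent-based system would learn them from traffic rather than hard-code them:

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    requests_per_minute: float  # observed request rate from this client
    hour_of_day: int            # 0-23, server local time
    logged_in: bool             # does the client have an authenticated session?

def intent_score(ctx: RequestContext) -> float:
    """Combine behavioral signals into a 0-1 suspicion score.

    Weights and thresholds here are illustrative, not tuned values.
    """
    score = 0.0
    if ctx.requests_per_minute > 120:   # faster than plausible human browsing
        score += 0.5
    if ctx.hour_of_day in range(2, 5):  # unusual traffic window for this app
        score += 0.2
    if not ctx.logged_in:               # anonymous traffic carries more risk
        score += 0.3
    return min(score, 1.0)

# A burst of anonymous 3 a.m. traffic scores as highly suspicious,
# even though no individual request matches any known signature.
print(intent_score(RequestContext(300, 3, False)))  # 1.0
```

The point of the sketch is the shift in question: not “does this request match a known bad pattern?” but “does this behavior look like an attacker?”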
Second, builders must demand tools that can run not just in monitoring mode but in blocking mode. Tools that can only run in monitoring mode for fear of false positives reinforce a broken system: the damage is done by the time the team can respond. Imagine a superhero who stands on a street corner yelling out crimes (jaywalking! burglary!) and then waits for law enforcement to respond rather than rolling up her sleeves and taking action in the moment. Security and operations teams are drowning in alerts. Though there will always be a need for Detection and Response (which we’ll get into later when we address visibility and control), teams need a foundation of tooling that can confidently block threats as they happen, not diagnose the problem after the breach.
Lastly, tooling needs to keep up with modern threats without placing a burden on the security and operations teams. With modern cloud and SaaS solutions, you get the full weight of a product security team staying ahead of threats and proactively delivering updates. There’s no need to worry about patching or to obsess over the latest threats.
Rule 2: There is no security without usability
The rise of intuitive, consumer-like web experiences in SaaS-based tooling has made usability table stakes for most technology products. And yet security solutions lag woefully behind. Legacy tools were designed to enforce and control, not to actually be operated. But modern teams expect a relationship with their tools: they need the ability to integrate, observe, and take action.
The user interface is the first line of defense for an operator. Unfortunately, it’s also the first place usability is neglected. Legacy UIs can be slow and clunky. And operators often need to log into multiple user interfaces to manage the system, even when using solutions from the same provider. A poor UI creates a multitude of risks: gaps in policy and enforcement across tooling, slow and uncoordinated response to urgent threats, and inconsistent — or worse, absent — visibility into the holistic security ecosystem.
A security solution should have a single, intuitive, easy-to-use interface that provides control of and visibility into the entire system. Observability should be all-encompassing and integrated, offering full visibility into the state of the system at a glance. And importantly, these solutions should be usable by security and non-security teams alike, so that more people across ops and engineering can join the fight against threats.
Next, modern tools must match modern application design. Too often, toolsets are simply packaged and sold together by a provider, but not actually capable of technical integration. A friend of mine calls this “integration by invoice.” Even if your system simply forces you to switch between tabs to navigate your solutions, you're still losing valuable seconds and integrated visibility while you're under attack. This approach weakens your overall security posture by creating technical and visibility gaps. Providers should expect automation and integration by default, which starts with full API control. All security solutions should have easy-to-use APIs that expose all of the functionality of the system. Solutions should easily integrate not just with each other, but with the entire response toolchain, including tools like Jira, PagerDuty, Slack, and Splunk. And they should offer real-time logs and stats that expose the data in whatever security monitoring or observability system the team uses. Integrating all of your solutions makes it far easier to determine the true intent of the attacker.
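As one example of the “integration by default” the paragraph above calls for, a security system with a full API can push blocking decisions straight into the response toolchain. The webhook URL and payload shape below are hypothetical placeholders (modeled loosely on a Slack-style incoming webhook), not a real product’s API:

```python
import json
from urllib import request

# Placeholder endpoint -- substitute your own chat/incident webhook URL.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def build_alert(rule: str, client_ip: str, action: str) -> dict:
    """Shape a security event into a simple webhook payload."""
    return {"text": f"[{action.upper()}] rule '{rule}' matched traffic from {client_ip}"}

def post_alert(payload: dict) -> None:
    """POST the alert to the webhook (network call; not executed in this sketch)."""
    req = request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)

alert = build_alert("sql-injection", "198.51.100.23", "block")
print(alert["text"])  # [BLOCK] rule 'sql-injection' matched traffic from 198.51.100.23
```

The same event should flow, unmodified, into ticketing, paging, and log analytics; the value is that every tool in the chain sees the same data at the same time.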
Rule 3: Real-time attacks require real-time reactions
Malware is a software system. Software systems are built by developers. Thus, attackers are developers. Realizing this makes it far easier to understand why threats are outpacing legacy security models. Agile attackers are employing advanced DevOps workflows to quickly attempt, adjust, and deploy new methods. It can happen hundreds of times during a single attack. How can you possibly protect your applications if you can’t react with the same speed? To be clear: the reaction time isn’t limited by how quickly your brain works. It’s dependent on the speed of your security solutions. Let’s break that down into speed of visibility and speed of control.
Speed of visibility
If it takes minutes or hours to spot an attack, it’s already too late. Attackers make an attempt; if it fails, they try another. All of this happens in the span of minutes. By the time your security solution relays the data to its own dashboards or to a SIEM or monitoring system, the damage has been done. Real-time visibility (measured in seconds) serves both automated and manual workflows: it allows the system to apply logic in real time for threat examination, and it gives operators the ability to react to alerts that require human intervention. In both cases, visibility must go hand in hand with control.
Speed of control
Once you or your system can see the threat, speed of response or remediation is critical. The more intelligent, intent-based approach to mitigation requires multiple streams of data to make a decision. Intent-based systems operate as self-learning and self-healing systems. They are constantly analyzing patterns and behaviors to predict new or evolving threats, so it’s imperative they can not only see and interpret traffic in real time, but also have the power to deploy new rules in response to changing threats.
It’s worth noting that speed of visibility and control cannot be constrained to a single deployment or geography. The prevalence of distributed systems (whether multi-region and multi-cloud deployments or a distributed workforce that needs protecting) requires the ability to take action across physical locations and boundaries. We have seen security solutions that are fast only as long as detection and protection happen in a single location. That isn’t how the world operates anymore: software and people are deployed across the globe, and security must keep up.
Rule 4: Dev, sec, or ops, everyone must think like an engineer
Together, we’ve watched the evolution from siloed teams working independently (and painfully) to ship software, to the unification of security, engineering, and operations through the secure DevOps model. But we are far from hanging up a “mission accomplished” banner. While many security and engineering leaders believe that bridging the technical and cultural divide between teams will result in faster, more secure application development cycles, antiquated practices and tooling still hold teams back.
One notable constraint is when secure DevOps is more performance art than authentic integration. Bolting security operators and their preferred tooling onto the end of your deployment pipeline does not mean you’re doing secure DevOps — and it won’t make your software ship faster. True secure DevOps builds security verification and vulnerability scanning directly into the automated testing and deployment framework. It provides a path for security teams to show up as an integrated part of the development team — not a gate brought in at the last minute to submit a list of vulnerabilities and hope they get fixed before the system goes live. In turn, developers write code using secure development practices and automated CI/CD pipelines test not just for functionality but for common security holes. Finally, the security team has the skills and authority to employ in-depth security audits in case the automated system misses anything.
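The vulnerability scanning described above can be as simple as a pipeline step that fails the build when a pinned dependency is known to be vulnerable. This is a toy sketch: the package names, versions, and vulnerable list are all made up, and a real pipeline would query an advisory database instead of a hard-coded set:

```python
# Made-up advisory data -- a real CI step would pull this from an advisory feed.
KNOWN_VULNERABLE = {("leftpadx", "1.0.2"), ("fastjsonx", "2.3.1")}

def audit_dependencies(pinned: dict) -> list:
    """Return the names of pinned dependencies with known vulnerabilities."""
    return [name for name, ver in pinned.items() if (name, ver) in KNOWN_VULNERABLE]

def ci_gate(pinned: dict) -> int:
    """Exit code for the CI step: nonzero fails the build before deploy."""
    flagged = audit_dependencies(pinned)
    for name in flagged:
        print(f"VULNERABLE: {name}=={pinned[name]}")
    return 1 if flagged else 0

# A build that pins a vulnerable version fails before it ever ships.
print(ci_gate({"leftpadx": "1.0.2", "requestsx": "9.9.9"}))  # 1 -> build fails
```

The point is where the check runs: inside the automated pipeline every developer already uses, not as a report handed over after the fact.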
In order for secure DevOps to live up to its promise, security practitioners, operations professionals, and developers must all adopt an engineering mindset with a focus on shipping secure software. Let’s eliminate the toil of managing security products and capabilities so that security professionals can shift from operators to engineers. It not only makes the overall system more reliable and secure, it gives employees a more fulfilling career path.
Better security is integral to building better software
It’s been fifteen years since Amazon launched AWS and kickstarted our migration to the cloud. In that time span, we’ve seen (and many of us have adopted!) hundreds of new frameworks, languages, services, and tools to build faster and more user-centric applications. And honestly, it’s been a lot of fun. But the friction between shipping software quickly and securely remains a sticking point for reasons that we can actually solve today.
The path to reducing that friction must include security solutions that meet the needs of modern teams — ones that include security as an integral part of the cultural and technical aspects of building software. It’s not enough to ship software quickly. We must ship high-quality software securely. For our part, we’ll be focused on building web application and API security solutions that live up to the rules we outlined today. We’re in this together.