In AI We Trust? Increasing AI Adoption in AppSec Despite Limited Oversight

Natalie Lightner

Senior Content Marketing Manager

“AI solutions offer an opportunity to reduce so much of the manual toil within application security, vulnerability management, and security operations. Despite some progress in automating workflows, security teams today are still overwhelmed by the volume of what they are dealing with. The survey shows that security leaders are taking advantage of this new technology to get ahead of that problem and to find and remediate security risks quickly. The trick now will be decreasing the false positive rate for AI-driven security signals and effectively integrating these solutions into workflows such that they do not become AI shelfware.” - Marshall Irwin, CISO, Fastly

The rapid integration of artificial intelligence (AI) into application security (AppSec) has been lauded as a game-changer, promising to alleviate overwhelming manual effort and accelerate vulnerability detection. With expanding attack surfaces, limited resourcing, and pressure to ship more code faster (but still securely), we hypothesized that AI could help fill that gap. Indeed, our latest survey reveals a striking trend: a staggering 90% of respondents are already leveraging or actively considering AI within their AppSec programs. Across regions and industries, 77% of respondents are already using AI, and another 13% are evaluating it to at least some extent within their AppSec programs and workflows.

Yet, beneath this enthusiastic adoption lies a critical, and perhaps concerning, paradox: despite this heavy reliance on AI, respondents report little to no oversight of AI results. A third of respondents reported that 50% or more of the AppSec issues identified by AI tooling in their workflows are acted upon without human review. So is this an indicator of trust or a symptom of teams taking calculated risks in the name of keeping pace?  

[Graph: Percentage breakdown of respondents using AI or ML in AppSec workflows]

AI Adoption Trends: A Global and Industry Deep Dive

77% of total survey respondents reported already using AI within their existing AppSec workflows, with the High Tech industry coming in highest (88% are using AI in AppSec use cases). SaaS (86%) and Healthcare (82%) were close behind, with M&E (73%) and Public Sector (64%) slightly lagging in AI adoption. South America saw the highest reliance on AI at 90%, with Asia (82%) and Europe (80%) slightly behind, while North America clocked in a bit lower at 75%. This may point to resourcing or talent differences, with North America relying less on AI - perhaps because it has a deeper security and tech talent pool to draw from.

When asked about integration into existing CI/CD pipelines and to what extent AI-driven security tooling is in place, 25% of survey respondents reported that AI is fully integrated into their existing development pipelines, 39% reported that it is partially integrated, 31% reported that they are ‘experimenting’ with implementation, and only 6% reported that it is ‘not at all integrated’ into existing workflows at this time. The same regional trends from the previous question appeared here too: 38% of South American respondents reported that AI was fully integrated into their existing pipelines, compared with only 25% for North America. By industry, High Tech reported the most complete integration (40% fully integrated), ahead of industries such as M&E (19%) and Gaming (15%).

[Pie chart: How integrated are AI tools in your CI/CD pipeline?]

The benefits of AI to respondents are clear: 55% report an (obvious) reduction in manual effort, 50% report faster vulnerability detection, 43% note better triage capabilities, and 36% report faster vulnerability remediation timelines. But do these benefits come at the expense of accuracy and true security?

Trust and Accuracy: Evaluating AI's Reliability in AppSec

In light of heavy adoption and integration numbers, we wanted to further understand respondents’ sentiments around AI’s reliability and trustworthiness. We asked respondents about the prevalence of false positives stemming from AI-driven security tooling. 37% reported occasional false positives and 12% reported frequent false positives, meaning nearly half (49%) of survey respondents see at least somewhat frequent false positive results - a finding that could have significant negative impacts on any security program. Only 11% reported that they ‘never’ see false positives. This raises the question of whether AI-driven security tooling is really yielding ‘good enough’ security results.

We dug further, questioning overall trust in AI’s accuracy. Only 22% ranked it as ‘excellent’, while 48% said it was ‘good enough’, and a combined 30% said it was somewhere between ‘fair’ and ‘very poor’. We also wanted to explore which challenges security teams are seeing while using AI in their security workflows. Able to select multiple answers, respondents reported that integration complexity (46%), internal skills gaps (38%), lack of trust in results (36%), regulatory or compliance concerns (33%), and poor explanation of security findings (23%) are giving security teams pause.

In free-form responses, respondents reported that they “have too much debugging [they] have to do afterward”, and that they “have ethical and compliance concerns” around AI usage in their security workflows.

The Critical Gap: AppSec Issues Acted Upon Without Human Review

A clear trend emerges when reviewing adoption and integration numbers alongside respondents’ responses around AI oversight, trust, and results: AI is integrated, it’s helping to speed things up, and it’s helping to fill the gap where resources and skills fall short - but it’s certainly not perfect. With false positives and mixed sentiments toward its trustworthiness and overall accuracy, we wanted to understand what guardrails, if any, organizations have in place to verify security results.

We asked respondents what percentage of errors found using AI-driven security tooling are then acted upon WITHOUT human review. 4% of respondents reported that 76-100% of identified errors are acted upon without human intervention, 26% reported 51-75%, and another 26% reported 1-25%.

This is all to say that a third of respondents (the 4% and 26% in the top two brackets above) report that 50% or more of the AppSec issues identified by AI-driven tooling in their workflows are acted upon without human review of any kind. Given the mixed sentiments above about AI’s overall accuracy and performance, it’s reasonable to assume that the lack of oversight here is a mixture of limited resources and bandwidth, paired with risk tolerances high enough to accept that AI is “good enough”.

For those orgs that DO practice some level of AI oversight, we asked what governance controls they have in place to verify results. Respondents could select more than one answer: 66% reported review checkpoints, 49% use AI model vetting, 46% use auditing and logging, and 32% rely on secure sandboxing. While it’s promising to see some level of oversight, these values should again be viewed in tandem with the responses above: while there are some decent oversight practices in place, the percentage of respondents who practice them is concerningly limited.
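As a purely illustrative sketch (not something described by respondents, and not Fastly tooling), the hypothetical Python snippet below shows one way a review checkpoint and audit logging could work together: AI-identified findings are auto-actioned only when they are low-risk and high-confidence, everything else is routed to a human, and every routing decision is logged for later audit. The Finding class, threshold, and routing rules are assumptions made for the example.

```python
# Hypothetical example: a human-review checkpoint for AI-identified AppSec
# findings, with audit logging. Class names, thresholds, and routing rules
# are illustrative assumptions, not survey findings or Fastly tooling.
import json
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("appsec.ai.audit")

@dataclass
class Finding:
    finding_id: str
    severity: str        # e.g. "low", "medium", "high", "critical"
    confidence: float    # model-reported confidence, 0.0-1.0
    description: str

def route_finding(finding: Finding, auto_threshold: float = 0.95) -> str:
    """Return 'auto_action' or 'human_review' for an AI-identified finding."""
    if finding.severity in ("high", "critical") or finding.confidence < auto_threshold:
        decision = "human_review"   # review checkpoint: a person must sign off
    else:
        decision = "auto_action"    # low-risk, high-confidence findings only
    # Auditing and logging: record every routing decision for later review.
    audit_log.info(json.dumps({
        "finding_id": finding.finding_id,
        "severity": finding.severity,
        "confidence": finding.confidence,
        "decision": decision,
    }))
    return decision

if __name__ == "__main__":
    f = Finding("F-123", "high", 0.88, "Possible SQL injection in /search handler")
    print(route_finding(f))  # -> human_review
```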

The Future of AI in AppSec: Potential for More (Better) AI

With an eye to the future, we asked respondents which improvements or capabilities they hope, or need, to see from AI in AppSec use cases. 32% reported that they were unsure or that AI is doing an OK job for now, 18% wish it were ‘more accurate with fewer false positives’, 10% reported the need for better AppSec-specific capabilities in general, and a combined 9% want faster results, with automated, real-time vulnerability detection during coding.

Some key highlights of this open-ended question included a respondent “[wanting] AI to be more transparent and communicate immediate threat detections with higher accuracy”. Another “[wished] AI tools could better understand complicated business context so they could better prioritize vulnerabilities.” And another stated that “AI needs the ability to precisely differentiate between legitimate and malicious activities while explaining rationale behind its decisions”.

We asked respondents if they are exploring new or future use cases for AI within their AppSec programs. 49% reported that they are, though informally, and 31% reported yes, with a formal roadmap - meaning a combined 80% are exploring new or additional use cases for AI in the future. The remaining 20% are either not exploring yet or are interested in doing so soon. Respondents reported “[wanting] to try out AI-driven threat modeling and automated code review to help make security more accurate and efficient,” and “[looking] to do automated remediation, predictive modeling, and AI code review”.

Survey Methodology

This survey was conducted by Fastly from August 20 to September 3, 2025, with 1,015 professionals. All respondents confirmed that they are responsible for influencing or making application security purchasing and strategy decisions as part of their roles. The survey was distributed across North America, South America, Europe, the Middle East, and Asia. Results were quality-controlled for accuracy, though, as with all self-reported data, some bias is possible.