
CISO Perspective: Q2 2025 Threat Insights Report

Marshall Erwin

Chief Information Security Officer

Fastly’s Security Research Team has unique insight into security trends, attack vectors, and threat activity across the application security landscape. Drawing from trillions of requests across our global customer base, we gain a real-time view of what is materially impacting security teams in the context of larger trends.

Each quarter, we summarize and present our key findings, with perspectives on what it all means for the broader market – and for you. The aim is to provide readers with insights to inform their own security program strategies, priorities, and practices. 

With the recent release of our Q2 Threat Insights Report, I’ve set out to provide my perspective on the findings and, in particular, how they impact the business when viewed through the broader lens of my role as CISO. Below are my takes on the Q2 findings…

The bots are coming for us

The overwhelming theme from our Q2 report, and the reason we made the entire report about them, was bots. Bots are a rapidly growing category of automated web traffic, and they are already reshaping how content is accessed, scraped, and monetized across the internet, a trend that will only continue.

Understanding the growing impact of AI-driven automation and all of its implications is critical for organizations wishing to remain competitive and, most importantly, secure. Insights into behavior and sufficient visibility and security capabilities are essential for navigating the rapid changes happening across the internet. 

To this point, we found that 37% of total observed traffic across Fastly’s global network was automated bot traffic, underscoring the need for organizations to adopt tooling and strategies that mitigate unwanted outcomes.

The percentage of web traffic made up of AI bots will undoubtedly continue to grow as data-hungry LLMs continue to crawl the web, existing AI solutions see increased adoption, and new agentic tools increasingly mediate the experience between consumers and websites. The implications for website owners are unfavorable if they don’t have an adequate understanding of bot traffic and behavior, and if appropriate measures are not in place to counteract negative impacts.  

The explosion of AI bots will put strain on websites, decrease the effectiveness of site analytics, and create a security risk by allowing attackers to hide in large volumes of automated traffic.

Visibility is King

While not all bot traffic is bad (think uptime monitoring and search engine crawling), website owners face a real challenge in defining ‘good’ versus ‘bad’ web traffic and filtering accordingly.

We’ve found that about 87% of bot traffic is malicious, creating opportunities for account takeovers, ad fraud, carding, and a multitude of other security concerns. And AI bots add another layer of complexity: though not necessarily malicious at face value, AI bots fetch content (without permission) in order to train models and enrich model responses. Site owners may view this activity as either beneficial or risky to the overall health of their website.

As AI crawling and fetching continue to grow, it will be increasingly difficult to distinguish between human traffic, wanted bot traffic, and unwanted bot traffic. Additional nuance will arise when further filtering for automated traffic that is actively malicious and facilitating account takeover (ATO) or other web-based attacks.
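To make that challenge concrete, here is a minimal sketch of a first-pass triage based only on the self-declared User-Agent header. The bot tokens and sample strings are illustrative assumptions, not how Fastly classifies traffic, and the ‘browser’ bucket is exactly where disguised bots hide.

```python
# A minimal illustration of User-Agent-based triage. The tokens and sample
# strings below are assumptions for demonstration; real bot management relies
# on far richer signals than the User-Agent header alone.
import re

# Self-declared tokens used by common AI crawlers and fetchers.
AI_BOT_TOKENS = ("gptbot", "claudebot", "ccbot", "perplexitybot", "bytespider")

# Tokens used by well-known "good" bots (search engines, uptime monitors).
KNOWN_GOOD_TOKENS = ("googlebot", "bingbot", "uptimerobot")

# Most desktop browser User-Agents begin with "Mozilla/5.0 (".
BROWSER_PATTERN = re.compile(r"^Mozilla/5\.0 \(")

def triage(user_agent: str) -> str:
    """Bucket a request by its self-declared User-Agent string."""
    ua = user_agent.lower()
    if any(token in ua for token in AI_BOT_TOKENS):
        return "ai-bot"
    if any(token in ua for token in KNOWN_GOOD_TOKENS):
        return "declared-good-bot"  # the claim still needs verification
    if BROWSER_PATTERN.match(user_agent):
        return "browser-or-disguised-bot"
    return "unclassified"

if __name__ == "__main__":
    samples = [
        "Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko); compatible; GPTBot/1.1",
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 Chrome/124.0",
    ]
    for ua in samples:
        print(f"{triage(ua):28s} <- {ua}")
```

A pass like this catches bots that announce themselves; everything else collapses into the browser bucket, which is why header-based filtering alone falls short.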

Intelligent traffic visibility is therefore absolutely critical in the current climate. 

Bots are avoiding detection

As I noted earlier, AI bots carry web performance implications: AI crawlers and fetchers drive degraded performance and increased infrastructure costs. And we’re seeing bots actively avoiding detection.

Our Q2 report saw indicators of some AI companies ‘disguising’ bots in order to bypass standard bot detection. We noted some bots ‘hiding their identity deliberately by using User-Agent strings of regular web browsers’. This deception often allows bots that website providers would like to block to slip by. Alternatively, it can also result in ‘desirable AI bots [being] classified as malicious bots and subsequently blocked by website defenses’.
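One way defenders can counter this spoofing, at least for crawlers whose operators publish verification guidance, is to confirm a declared identity independently of the User-Agent string. The sketch below uses the reverse-plus-forward DNS check that Google and Bing document for their crawlers; the domain suffixes are a simplified illustration, and production checks should also consult published IP ranges and cache results.

```python
# A minimal sketch of reverse-plus-forward DNS verification for declared
# crawlers. Domain suffixes are simplified for illustration.
import socket

VERIFIED_CRAWLER_DOMAINS = {
    "Googlebot": (".googlebot.com", ".google.com"),
    "Bingbot": (".search.msn.com",),
}

def verify_declared_crawler(client_ip: str, declared_bot: str) -> bool:
    """Return True only if the client IP resolves back to the declared crawler."""
    expected_suffixes = VERIFIED_CRAWLER_DOMAINS.get(declared_bot)
    if not expected_suffixes:
        return False
    try:
        hostname, _, _ = socket.gethostbyaddr(client_ip)       # reverse DNS
        if not hostname.endswith(expected_suffixes):
            return False
        return socket.gethostbyname(hostname) == client_ip     # forward confirmation
    except OSError:
        return False

# A request whose User-Agent claims "Googlebot" but whose IP fails this check
# is very likely a disguised bot, regardless of what the header says.
```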

As this deceptive behavior escalates, we risk creating a ‘cat and mouse’ game between AI crawlers and bot management providers. Organizations should ensure they have bot management solutions in place that can actively enforce a website provider’s defined level of acceptance for bot traffic.

What should you do about bots?

Get a GREAT bot management tool 

It will be increasingly important for website owners to proactively implement solutions that give them granular visibility and control over bot traffic. Tools like Web Application Firewalls (WAFs) and bot management solutions will no longer be optional controls adopted based on a company’s risk profile; they are becoming business-critical.

Advanced bot management solutions like Fastly Bot Management give website owners fine-grained control over which AI bots are allowed, how frequently they can access the site, and which content they can reach. These tools dynamically identify the growing number of AI bots in real time and provide comprehensive visibility into bot activity to help monitor and manage access effectively.
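As a rough illustration of the kind of policy such control implies, the sketch below evaluates a bot request against a per-bot allowlist, rate limit, and path scope. The policy schema, bot names, and thresholds are hypothetical examples, not Fastly Bot Management’s configuration model.

```python
# A rough sketch of a per-bot access policy: which AI bots are allowed, how
# often, and on which paths. Schema and thresholds are hypothetical examples.
import time
from collections import defaultdict

BOT_POLICY = {
    "GPTBot":        {"allowed": True,  "rpm": 60, "paths": ("/blog/",)},
    "PerplexityBot": {"allowed": True,  "rpm": 30, "paths": ("/docs/",)},
    "Bytespider":    {"allowed": False, "rpm": 0,  "paths": ()},
}

_recent_requests = defaultdict(list)  # bot name -> timestamps within the window

def decide(bot: str, path: str, now: float | None = None) -> str:
    """Return 'allow', 'block', or 'rate-limit' for a bot request to a path."""
    now = time.time() if now is None else now
    policy = BOT_POLICY.get(bot)
    if policy is None or not policy["allowed"]:
        return "block"
    if not any(path.startswith(prefix) for prefix in policy["paths"]):
        return "block"
    window = [t for t in _recent_requests[bot] if now - t < 60.0]
    if len(window) >= policy["rpm"]:
        return "rate-limit"
    window.append(now)
    _recent_requests[bot] = window
    return "allow"

print(decide("GPTBot", "/blog/q2-threat-insights"))       # allow
print(decide("GPTBot", "/admin/settings"))                # block (out of scope)
print(decide("Bytespider", "/blog/q2-threat-insights"))   # block (not allowed)
```

In practice this enforcement happens at the edge rather than in application code, but the decision logic is the same: identify the bot, then apply the site owner’s chosen policy.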

Your biggest takeaway from our findings should be the absolute necessity of adopting a bot management solution capable of handling both traditional and AI bot traffic. Failure to do so will inevitably result in lost time, revenue, and resources, and, perhaps even more concerning, serious security implications.

To get more in-depth insights into bot activity, read our Q2 report here.