Agentic AI
,
Artificial Intelligence & Machine Learning
,
Governance & Risk Management
Platform Vendors Target Runtime Defense, Prompt Flow, Agent Identity and Output
•
October 8, 2025

Artificial intelligence security acquisitions have skyrocketed in recent months as major security vendors look to establish dominance in protecting AI systems, applications and workflows.
See Also: When Identity Protection Fails: Rethinking Resilience for a Modern Threat Landscape
The first deal occurred not even a year ago, with Cisco buying Robust Intelligence in September 2024 for a reported $400 million to boost the security of AI apps and infrastructure. Since then, an additional 11 AI security acquisitions have been consummated, with Cato Networks, Check Point, CrowdStrike, F5 and SentinelOne agreeing in September to fork over a combined $1.31 billion to gain an edge in AI security (see: Check Point Adds AI Application Defense With Lakera Purchase).
This wave of acquisitions validates the massive quantities of money flowing into early-stage AI startups since public interest was piqued by the groundbreaking advances of ChatGPT 3.5 in November 2022. Now, AI security encompasses everything from runtime protections and prompt injection defenses to autonomous agent control, data governance and regulatory compliance – with vendors focusing on each component.
The size of these deals has varied greatly from as little as a reported $20 million for retrieval-augmented generation startup Revrod to as much as $634.5 million for Protect AI, which offers AI scanning, large language model security and generative AI red-teaming. The pace of deals reflects both the maturity of GenAI in enterprise settings as well as the immaturity of security tools available to control it (see: Torq Acquires Startup Revrod to Enhance AI SOC Capabilities).
While 2023 was focused on securing LLMs such as ChatGPT or Claude, 2024 and 2025 introduced security challenges with LLM-powered bots and co-pilots that can autonomously make decisions, take action and interact with other agents, systems, or data. Other AI security acquisitions have focused on helping enterprises see where AI is being used, what data it touches and how policies are being enforced.
But some AI security transactions have focused on identity and trust – not just for users, but for agents and autonomous systems, attempting to answer questions like, ‘Who owns the agent? What permissions does it have?’ And ‘Can it be hijacked?’ Buyers have also moved fast to integrate AI security into their existing infrastructure to create a unified security fabric in which AI is just another policy surface.
Prompt Injection Defense and Agentic AI Threats
GenAI models are trained to be helpful, context-aware and improvisational, which makes them steerable if not properly guarded. Unlike traditional code that behaves deterministically, LLMs respond to cleverly crafted inputs, malicious instructions or biased output. Techniques such as prompt injection, data poisoning and jailbreaking are inherent risks in model-based reasoning – as well as AI hallucinations.
Cisco’s acquisition of Robust Intelligence squarely targets these risks by embedding automated model testing, runtime validation and compliance checks into the AI pipeline. Prompt defense has also been a cornerstone of the SentinelOne-Prompt, CrowdStrike-Pangea and F5-CalypsoAI deals, offering solutions that intercept malicious prompts in real time, score prompt risk and inject guardrails into all interaction (see: Strengthening AI Security With Platform Strategy).
“For the first time, models have to be looked at both ways,” Palo Alto Networks chairman and CEO Nikesh Arora told Information Security Media Group in May. “They could generate malicious code. They could be poisoned from a training perspective and start giving you the wrong answers. They could be agents or activities that models do which could be hijacked by bad actors.”
The industry is rapidly moving beyond LLMs as tools to AI-based agents as actors capable of generating text, making decisions, calling APIs, interacting with other agents and executing tasks autonomously. This shift turns traditional security models upside down since an agent might create its own sub processes, spawn others, invoke third-party APIs or interpret vague commands without direct user oversight.
Snyk’s purchase of Invariant Labs focuses on the MCP protocol, which governs how agents communicate with models and each other. F5’s CalypsoAI acquisition allows for red-teaming against agentic workflows to uncover emergent threats. And through its Aim Security acquisition, Cato Networks positioned agent monitoring directly in the network fabric, enabling traffic-based behavioral analysis (see: Agentic AI Security Gets Fuel With Snyk’s Invariant Labs Buy).
“Every agent is going to have to go through that process of learning, creating trust and then being trusted to act on behalf of whoever is the owner of the agent,” Arora said. “Now, in that context, imagine the security implications if I could take over your agent. I could sow chaos depending on the credentials or the permission of that agent.”
AI Runtime Observability and Data Leakage Prevention
One of the most significant operational gaps in AI adoption is the lack of runtime observability, with organizations struggling to know what data a model is ingesting or what it’s producing. Observability answers these questions by providing a live view of AI behavior across prompts, responses and system interactions, and it is a precursor to regulating or securing AI systems.
Coralogix’s acquisition of Aporia addressed this need directly by combining infrastructure observability with AI runtime insights. And through their buys of Apex Security and Prompt Security, respectively, Tenable and SentinelOne layered on user behavior observability, helping to detect accidental or malicious misuse by employees or applications (see: Tenable Bolsters AI Controls With Apex Security Acquisition).
“Most of the CISOs we talked to say, ‘Look, you got to give me the tools to enforce policy and add guardrails,'” Tenable Chief Product Officer Eric Doerr told ISMG in May. “It’s fine to detect if something bad is happening, but that can’t be 100 times a day. And so Apex has done a lot of that in a really good package.”
One of the biggest risks of GenAI in the enterprise is data leakage, with workers inadvertently pasting confidential information into a chatbot, models regurgitating sensitive data it was exposed to during training, or adversaries crafting prompts to extract private information through jailbreaking. Allowing AI access without control is equivalent to opening an unsecured API to your crown jewels.
Tenable’s acquisition of Apex directly tackles this issue by focusing on user intent, detecting whether AI misuse is accidental or malicious. And as part of SentinelOne, Prompt Security offers fine-grained DLP controls, capturing every prompt and applying tokenization, redaction or blocking in real time. AI tools must be subject to the same policies, controls and audit mechanisms as any other data system (see: SentinelOne Buys Observo AI for $225M to Fuel Data Ingestion).
“Think of this as the classic IT user problem,” Doerr said. “ChatGPT gets turned on, great. What are they doing? Where are employees? Are they putting confidential data in there or not? And there’s some controls that get built in by the chat providers. But it’s not enough. It’s not what people need. So, that’s what led us to it. And we thought that the Apex team was a terrific team they put together.”
Bi-Directional Inspection and Identity-Centric AI Security
Output is just as risky as input with GenAI since an LLM could generate sensitive content, malicious code or incorrect results that are trusted by downstream systems or users. Palo Alto Networks’ Arora noted the need for bi-directional inspection to watch not only what goes into large language models, but also what comes out.
Cisco’s acquisition of Robust Intelligence supports behavioral adaptation by enforcing continuous validation during the model lifecycle. Enterprises need persistent, adaptive defenses that treat AI like a living system, one that requires constant scanning, red-teaming and reevaluation (see: Cisco Bolsters AI Security by Buying Robust Intelligence).
“In the past, you’ve had inspection one way – data coming into the enterprise,” Arora told ISMG in May. “You don’t really inspect data going out, because it’s data that you’ve created through your SaaS applications or private applications in the company. For the first time, models have to be looked at both ways.”
Another key challenge is defining identity in a non-human context, raising questions around how AI agents should be authenticated, what permissions AI agents should have and how to prevent escalation or impersonation. Enterprises must treat bots, copilots, model endpoints and LLM-backed workflows as identity-bearing entities that log in, take action, make decisions and access sensitive data.
CrowdStrike’s acquisition of Pangea heavily emphasized AI identity and role enforcement, ensuring that only trusted agents can access specific models or datasets. Cato Networks similarly enables traffic-level enforcement through its acquisition of Aim Security by observing and regulating which agents or models are communicating with which systems. Identity threat detection frameworks must cover AI entities (see: CrowdStrike Buys Pangea for $260M to Guard Enterprise AI Use).
“The core infrastructure layer is really where a lot of the componentry comes into play,” CrowdStrike Chief Business Officer Daniel Bernard told ISMG in September. “How do you secure the data center? How do you secure the GPUs? How do you secure data in transit, data at rest? And then, ultimately, how do you secure the cloud?”
Posture Management, Red-Teaming and Protocol Inspection
As organizations build or fine-tune AI models, they should be able to detect vulnerable open-source LLM packages, scan model weights for malicious embedding, assess the provenance of training data and test fine-tuned models for bias or risk. Palo Alto Networks, Snyk and Cato all shift security into the development process, with Snyk’s AI Bill of Materials enabling dev teams to track model and agent dependencies.
Red-teaming has also become essential in AI security since AI systems are prone to new classes of exploits that may not be covered by traditional scanners. By purchasing CalypsoAI and Protect AI, F5 and Palo Alto Networks simulate real-world attacks against models and agents, uncovering vulnerabilities before bad actors do. Red-teaming is a key component of maintaining trust, reliability and security (see: F5 Targets AI Model Misuse With Proposed CalypsoAI Purchase).
“You’re typically used to doing offline red-teaming against infrastructure, because once you deploy the infrastructure, it doesn’t change,” Arora told ISMG in May. “But in the AI case, when it adapts and changes, you got to start looking at almost like a persistent red-teaming over time. Because the behavior could change on some periodic basis.”
AI workloads run on highly specialized infrastructure including GPU clusters, ML pipelines, orchestration systems and new protocols like MCP, which are increasingly complex and often invisible to traditional security tools. And if you don’t secure the hardware, protocol and orchestration layers, you’re leaving AI unguarded at the foundation.
Snyk leads the charge in inspecting AI-specific code and protocol risks through its buy of Invariant Labs, including inter-agent message flows. Cisco and Palo Alto aim to protect the infrastructure layer, ensuring that LLMs and inference engines are securely deployed across hybrid environments. And Cato Networks brings AI security to the network layer, inspecting and influencing agentic communication in real time (see: Cato Networks Acquires Aim Security for AI Threat Protection).
“AI is not just another layer of security, it’s a whole new stack that is completely different than previous stacks,” Cato Networks co-founder and CEO Shlomo Kramer told ISMG in September. “It’s listening to thousands of conversations in English and deciding what is okay for the enterprise and what is not. Both the capabilities and the rules of reaching that decision are completely new.”