Autonomous AI agents have evolved from text generators into active computational actors, introducing novel failure modes that traditional security frameworks cannot detect or govern. Reasoning drift, self-escalation of privileges, and emergent tool misuse demand a new approach to AI security.
The Agent Kill Chain framework addresses this gap by providing the first structured behavioral model for the lifecycle of agentic AI misuse. Download this white paper to learn how the Agent Kill Chain gives security teams the shared language and defenses needed to move from reactive prompt guards to comprehensive behavioral governance.
Artificial intelligence plays a critical role in security intelligence: it enhances threat detection, automates response actions, and enables predictive analysis of potential threats. AI algorithms can analyze large volumes of data to identify patterns and anomalies, helping security teams detect and respond to cyber threats more efficiently. AI technologies can also help identify vulnerabilities, predict security risks, and provide actionable intelligence to improve overall cybersecurity posture.
Agent interaction with customer data varies by capability.
Mobot (including Query Agent and Knowledge Agent) and Summary Agent do NOT process or analyze customer data.
The SOC Analyst Agent (in preview with select customers as of February 2026) processes customer data to help review insight data, correlate activity, and assist with triage and investigation as directed by the user.
For any AI capability that processes customer data, customers retain control over whether it is enabled in their environment.