Empowered AI in Security Operations Centers: An Answer to SOAR’s Unmet Commitments
The concept of Security Orchestration, Automation, and Response (SOAR) emerged with the promise to revolutionize Security Operations Centers (SOCs) by automating tasks, streamlining processes, and boosting efficiency. Despite a decade of technological advancement, SOAR has not fully met expectations, leaving SOCs confronted with many of the same challenges. Enter Empowered AI: a new approach that may finally realize the long-sought vision of SOC automation, offering a more dynamic and adaptable way to automate SOC operations effectively.
The Inadequacies of Three SOAR Generations
SOAR made its debut in the mid-2010s with companies like PhantomCyber, Demisto, and Swimlane, aiming to automate SOC operations, enhance productivity, and shorten response times. However, its major success was in automating routine tasks like threat intel propagation rather than the core threat detection, investigation, and response (TDIR) functions.
The evolution of SOAR can be categorized into three generations:
- Generation 1 (Mid-2010s): Initial SOAR platforms featured rigid playbooks, intricate implementations (often requiring coding), and high maintenance needs. Adoption beyond basic applications, such as phishing analysis, was limited.
- Generation 2 (2018–2020): This phase brought forth no-code, drag-and-drop editors and extensive playbook libraries, reducing the necessity for technical resources and enhancing adoption rates.
- Generation 3 (2022–present): The latest iteration utilizes generative AI (LLMs) to automate playbook development, further easing the technical load.
Despite these enhancements, the core promise of automating SOC operations with SOAR remains unrealized, for reasons we will explore below. Each generation focused primarily on improving usability and reducing the technical complexity of SOAR rather than addressing the fundamental hurdles in SOC automation.
What Hindered SOAR Success?
To understand why SOAR fell short in automating SOC functions, it's important to recognize that SOC work comprises many distinct activities and tasks specific to each SOC. In general, the SOC activities tied to alert handling can be divided into two groups:
- Cognitive tasks – e.g., determining the validity of an incident, understanding the context, assessing the impact, formulating a response plan, etc.
- Operational tasks – e.g., executing response actions, informing stakeholders, updating record systems, etc.
SOAR efficiently manages “operational” tasks but grapples with the “cognitive” tasks. Here’s why:
- Complexity: Cognitive tasks demand profound comprehension, data synthesis, pattern recognition, tool familiarity, expert knowledge, and decision-making. Static playbooks struggle to emulate these attributes.
- Unpredictable Inputs: SOAR depends on predictable inputs for consistent outcomes. In the realm of security, where exceptions are commonplace, playbooks become increasingly intricate to handle exceptional scenarios, leading to high implementation and maintenance demands.
- Customization: Off-the-shelf playbooks seldom deliver as intended. They invariably necessitate customization due to the prior point, resulting in persistent maintenance overhead.
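To make the branching problem concrete, here is an illustrative sketch in Python of how a static playbook accrues exception branches over time. The field names and dispositions are entirely hypothetical, not any real SOAR product's API:

```python
# Illustrative only: a static phishing playbook that starts with one simple
# rule and accumulates exception branches as real-world cases pile up.

def phishing_playbook(alert: dict) -> str:
    """Return a disposition for a reported phishing email (hypothetical fields)."""
    # Original "happy path": bad sender reputation means quarantine.
    if alert.get("sender_reputation") == "malicious":
        return "quarantine"

    # Exception 1: internal senders bypass reputation checks entirely.
    if alert.get("sender_domain") in {"corp.example.com"}:
        if alert.get("attachment_hash_known_bad"):
            return "quarantine"
        return "close_benign"

    # Exception 2: URL-only emails need a different check than attachments.
    if alert.get("url") and not alert.get("attachment"):
        if alert.get("url_category") == "newly_registered":
            return "escalate_to_analyst"
        return "close_benign"

    # Anything the playbook authors did not anticipate still needs a human.
    return "escalate_to_analyst"
```

Each new edge case adds another branch, and unanticipated scenarios still fall through to a human analyst, which is why off-the-shelf playbooks demand continual customization and maintenance.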
Automating “cognitive tasks” is the key to automating a larger portion of the overall SOC workflow.
Investigation: The SOC's Weak Point
The triage and investigation phases of security operations consist of cognitive tasks that precede response. Because these tasks have resisted automation, SOCs remain dependent on manual, slow, and non-scalable processes. This manual bottleneck ties the SOC to human analysts, hindering automation by:
- Dramatically slowing response times, since alerts queue behind human decision-making.
- Limiting the productivity gains automation was meant to deliver.
To actualize the original SOC automation promise of SOAR—improving SOC velocity, scope, and efficiency—the focus should shift to automating cognitive tasks in the triage and investigation phases. Successfully automating investigation tasks would simplify security engineering, allowing playbooks to concentrate on corrective actions rather than triage management. It also paves the way for a fully autonomous alert handling mechanism, significantly reducing mean time to respond (MTTR).
The pivotal question is: how can we efficiently automate triage and investigation?
Empowered AI: The Crucial Element in SOC Automation
In recent years, large language models (LLMs) and generative AI have transformed many domains, including cybersecurity. AI excels at the "cognitive tasks" of the SOC, such as interpreting alerts, conducting analysis, synthesizing data from diverse sources, and drawing conclusions. It can also be trained on security knowledge bases like MITRE ATT&CK, investigation methodologies, and company behavior patterns, replicating the expertise of human analysts.
What Defines Empowered AI?
There has been widespread confusion surrounding AI in SOCs, largely due to premature marketing claims from the 2010s, made well before modern AI techniques like LLMs existed. The confusion was compounded by the 2023 industry-wide scramble to graft an LLM-based chatbot onto existing security products.
To clarify, there are at least three categories of solutions marketed as "AI for the SOC." Here is how they compare:
- Analytics/ML Models: These machine learning models have been in use since the early 2010s, primarily in areas like UEBA and anomaly detection. Though marketers label them AI, they do not meet today's standard for advanced AI. This is a detection technology.
- Analytics solutions can improve threat detection rates but often generate large volumes of alerts, many of them false positives. Analysts still have to sift through these alerts, so the net effect is more alerts to triage, not a more efficient SOC.
- Co-pilots (Chatbots): Co-pilot platforms such as ChatGPT and add-on chatbots can surface relevant information, but decision-making and execution remain with the user. This technology is typically employed in the SOC for post-detection tasks.
- While co-pilots boost productivity by simplifying data interaction, they still depend on a human to drive the entire process: the SOC analyst must initiate queries, interpret the outputs, merge them into an actionable plan, and then carry out the necessary response actions.
- Agentic AI: This goes beyond assistance, operating as an autonomous AI SOC analyst that completes entire workflows. Agentic AI mimics human techniques, from understanding the alert through decision-making, and delivers fully completed units of work. This technology is typically employed in the SOC for post-detection tasks. By delivering fully executed alert triages and incident investigations, Agentic AI lets SOC teams concentrate on higher-level decision-making, yielding notable productivity gains and considerably more efficient operations.
With these categories defined, it is worth noting that a single solution may incorporate several, or even all, of these technologies. For instance, Agentic AI solutions frequently include a chatbot for threat hunting and data exploration, and analytics models for use in analysis and decision-making.
How Agentic AI Works in SOC Automation
Agentic AI transforms SOC automation by handling the triage and investigation processes before alerts ever reach human analysts. When a detection product generates a security alert, the alert first goes to the AI instead of directly to the SOC. The AI then replicates the investigative methods, processes, and decision-making routines of a human SOC analyst to fully automate triage and investigation. On completion, the AI hands its results to human analysts for review, letting them focus on strategic decisions instead of operational work.
The sequence begins with the AI interpreting the meaning of the alert using a large language model (LLM). It converts the alert into a set of security hypotheses describing what might be taking place. To broaden its analysis, the AI pulls data from external sources, such as threat intelligence feeds and behavioral context from analytics models, adding valuable context to the alert. Based on this data, the AI dynamically selects specific tests to confirm or refute each hypothesis. Once these tests complete, the AI evaluates the results to either reach a verdict on the alert's maliciousness or restart the cycle with the newly gathered data until a definitive conclusion is reached.
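The hypothesis-driven loop described above can be sketched as follows. Every function name here (`interpret_alert`, `gather_context`, `run_test`, `evaluate`) is a hypothetical stand-in for the corresponding step, not a real product API:

```python
# A minimal sketch of an agentic triage loop: form hypotheses, enrich with
# context, run tests, and either conclude or iterate with new data.

def triage(alert, interpret_alert, gather_context, run_test, evaluate,
           max_rounds=3):
    """Return a verdict for the alert, or escalate if inconclusive."""
    hypotheses = interpret_alert(alert)      # LLM turns the alert into hypotheses
    evidence = gather_context(alert)         # threat intel, behavioral baselines, ...
    for _ in range(max_rounds):
        results = [run_test(h, evidence) for h in hypotheses]
        verdict, new_leads = evaluate(results)   # a verdict, or new data to chase
        if verdict is not None:
            return verdict                   # e.g. "malicious" or "benign"
        evidence.update(new_leads)           # iterate with freshly gathered data
    return "escalate_to_analyst"             # inconclusive: hand off to a human
```

The `max_rounds` cap is a design choice: it bounds the iterate-with-new-data cycle so an alert that never resolves is escalated rather than looping forever.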
Upon finishing the investigation, the AI consolidates its findings into a detailed, human-readable report. The report includes a verdict on the alert's maliciousness, a summary of the incident, its scope, a root cause analysis, and an action plan with prescriptive instructions for containment and remediation. This gives human analysts everything they need to quickly understand and assess the incident, drastically reducing the time and effort required for manual investigation.
Agentic AI also provides advanced automation capabilities through API integrations with security tools, enabling it to execute response actions automatically. After a human analyst reviews the incident report, automation can proceed in either a semi-automated mode, where the analyst clicks a button to trigger response workflows, or a fully automated mode that requires no human intervention. This flexibility lets organizations balance human oversight with automation, maximizing both efficiency and security.
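The two response modes can be sketched as a simple gate. The names and mode strings are illustrative assumptions; in practice each action would be an API call into a security tool:

```python
# Sketch of semi-automated vs. fully automated response execution.

def respond(incident, actions, mode, analyst_approved=False):
    """Run response actions according to the configured automation mode."""
    if mode == "fully_automated":
        return [act() for act in actions]    # no human in the loop
    if mode == "semi_automated":
        if analyst_approved:                 # the "button click" after review
            return [act() for act in actions]
        return "awaiting_analyst_approval"   # hold until an analyst signs off
    raise ValueError(f"unknown mode: {mode}")
```

The design choice here is that approval gates the same action list in both modes, so an organization can switch modes without rewriting its response workflows.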
Can We Genuinely Count on AI for SOC Automation?
A recurring question in the security industry is, "Is AI ready?" or "How can we trust its accuracy?" Here are the key reasons the agentic AI approach can be trusted:
- Comprehensiveness of Work: Human analysts can conduct exhaustive investigations, but time constraints and heavy workloads mean those investigations are rarely as thorough or as frequent as they should be. Agentic AI, by contrast, can apply a wide array of investigative techniques to every alert it handles, ensuring a more thorough investigation and increasing the odds of finding the evidence needed to confirm or dismiss an alert's maliciousness.
- Precision: Modern AI is powered by a collection of specialized mini-agent LLMs, each focused on a narrow domain, whether security, IT infrastructure, or technical writing. This focused design lets the agents hand work off to one another, much like a microservice architecture, mitigating issues like hallucination. With accuracy rates in the high 90s, these AI agents often surpass humans at repetitive tasks.
- Behavioral Inspection: AI excels at applying behavioral modeling during triage and investigation. Unlike human analysts, who may lack the time or expertise for in-depth behavioral analysis, AI continuously learns normal patterns and compares suspicious activity against baselines for users, entities, peer groups, or entire organizations. This improves the precision of its findings and yields more reliable conclusions.
- Transparency: AI SOC analysts maintain a detailed record of every action: each question asked, test performed, and result obtained. This data is readily accessible through user interfaces, often backed by chatbots, making it easy for human analysts to scrutinize the findings. Every conclusion and proposed action is backed by data, frequently cross-referenced with industry security frameworks like MITRE ATT&CK. This degree of transparency and auditability is rarely achievable with human analysts, given the time required to document their work at that scale.
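As one deliberately simplified illustration of the baseline comparison behind the behavioral-inspection point, a z-score against a user's historical activity is a common building block of such analysis. The field meanings and threshold below are assumptions for the sketch, not a description of any particular product:

```python
# Score how far a user's current activity deviates from their own baseline.
from statistics import mean, stdev

def anomaly_score(history, current):
    """Z-score of the current value against the historical baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0                      # no variation in history: nothing to flag
    return (current - mu) / sigma

def is_suspicious(history, current, threshold=3.0):
    """Flag activity more than `threshold` standard deviations above normal."""
    return anomaly_score(history, current) > threshold
```

Real behavioral models are far richer (peer groups, seasonality, many features at once), but the core idea is the same: compare observed activity to a learned baseline rather than to a fixed rule.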
To sum up, agentic AI offers a more thorough, precise, and transparent approach to SOC automation, giving security teams a high level of trust in its capabilities.
4 Key Benefits of an Agentic AI Approach to SOC Automation
By embracing an agentic AI approach, SOCs can realize notable benefits that boost both operational efficiency and team morale. Here are four key advantages of this technology:
- Discovering More Attacks with Existing Detection Signals: Agentic AI assesses every alert, correlates data across sources, and conducts thorough examinations. This empowers SOCs to pinpoint the detection signals representing actual attacks, uncovering threats that might have been overlooked.
- Reducing MTTR: By eliminating the manual bottleneck of triage and investigation, Agentic AI enables remediation to begin quickly. What previously took days or weeks can now be handled in minutes or hours, significantly reducing mean time to respond (MTTR).
- Enhancing Productivity: Agentic AI makes it feasible to evaluate every security alert, a task human analysts could never perform at scale. This frees analysts from repetitive work, allowing them to focus on more complex security projects and strategic initiatives.
- Raising Analyst Morale and Retention: By taking on the repetitive triage and investigation work, Agentic AI transforms the SOC analyst's role. Instead of performing dreary, monotonous tasks, analysts can focus on reviewing reports and engaging in high-value initiatives. This shift boosts job satisfaction, helping retain skilled analysts and improving overall morale.
These advantages not only streamline SOC operations but also enable teams to work more efficiently, improving both threat detection and the overall job satisfaction of security analysts.
About Radiant Security
Radiant Security is a pioneer and leader in AI SOC analysts, using generative AI to simulate the skills and decision-making routines of top-tier security professionals. With Radiant, alerts are analyzed by AI before they ever reach the SOC. Each alert undergoes a series of dynamic tests to determine maliciousness, delivering decision-ready results in as little as three minutes. These results include a comprehensive incident summary, root cause analysis, and a response plan. Analysts can respond manually with step-by-step AI-generated instructions, use single-click responses via API integrations, or opt for fully automated responses.
Want to learn more?
Schedule a demo with Radiant to see how an AI SOC analyst can supercharge your SOC.
