Exaforce
Industry
October 9, 2025

GPT needs to be rewired for security

How a deterministic, multi-model engine delivers reliable SOC automation outcomes, including real-time triage, fewer false positives, and reduced MSSP/MDR dependence.

This article originally appeared in Help Net Security.

LLMs and agentic systems already shine at everyday productivity, including transcribing and summarizing meetings, extracting action items, prioritizing critical emails, and even planning travel. But in the SOC (where mistakes have real cost), today’s models stumble on work that demands high precision and consistent execution across massive, real-time data streams. Until we close this reliability gap at scale, LLMs alone won’t automate the majority of SOC tasks.

Humans excel at framing ambiguous problems, making risk-aware judgments, and applying domain intuition, especially when signals are weak, conflicting, or novel.

Machines excel at processing high-volume, high-velocity, unstructured data at low marginal cost. They don’t tire, forget, or drift, which makes them an ideal complement for threat detection. Triage, however, has stayed human because machines (and LLMs) cannot truly substitute for a human when it comes to context assembly, hypothesis generation, and business-risk judgment.

We’re still in the early innings of SOC automation. To keep pace as attackers weaponize AI-driven automation, we must reach an equilibrium where machines thwart most attacks at machine speed. Without that balance, defenders fall behind fast.

What Breaks LLMs Inside a SOC

To shift toward machine-speed defense, the platform must overcome the current limits of LLMs:

  1. Real-time ingestion at scale
    Continuous processing of fast-accumulating data (logs, EDR, email, cloud resources, identity, code, files, and more) without lag or loss.
  2. Large, durable context
    Retain and retrieve very large, long-lived knowledge (asset inventories, baselines, case history) to correctly interpret actions and sequences over time.
  3. Low-latency, low-cost execution
    Perform filtering, correlation, enrichment, and reasoning at the rate of incoming data, and do it at very low cost, so it scales with the enterprise.
  4. Deterministic logic
    Follow a chain of thought over large datasets to arrive at results that are repeatable, explainable, and understandable (not fickle or opaque).
  5. Consistency of reasoning
    Deliver calibrated, repeatable logic with a spread that matches the margin of disagreement between two humans, not the volatility of a generative model.

What Fixes It: Rethinking the Stack

The path forward isn’t “more prompts.” It’s a new type of engine that solves these problems and makes LLMs suitable for the SOC use case.

Anybody who has done anything useful with AI/ML knows that models need high-quality data. We must move beyond log-centric SIEMs, as high-quality threat detections and investigations require far more data than logs can provide. Consider a sensitive action, say, a change to file permissions, from an unusual location in a business-critical application. The event record isn’t enough; we need to know the file type, its labels, the role and permissions of the user who made the change, who created the file, and more.
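
The permissions-change example above can be sketched as a simple enrichment join. This is an illustrative toy (the event shape, inventories, and suspicion heuristic are all hypothetical), but it shows why the bare event record is not enough on its own:

```python
# Illustrative sketch: enriching a raw event with asset and identity
# context before any model reasons over it. All names are hypothetical.

RAW_EVENT = {
    "action": "file.permissions_changed",
    "file_id": "f-123",
    "user_id": "u-42",
    "source_ip": "203.0.113.7",
}

# Context the event record alone cannot provide.
FILE_INVENTORY = {
    "f-123": {"type": "spreadsheet", "labels": ["finance", "pii"], "created_by": "u-7"},
}
IDENTITY_DIRECTORY = {
    "u-42": {"role": "contractor", "permissions": ["read"]},
}

def enrich(event: dict) -> dict:
    """Join the raw event with file inventory and identity context."""
    file_ctx = FILE_INVENTORY.get(event["file_id"], {})
    user_ctx = IDENTITY_DIRECTORY.get(event["user_id"], {})
    return {
        **event,
        "file": file_ctx,
        "user": user_ctx,
        # A read-only contractor changing permissions on a labeled
        # finance file is far more suspicious than the bare event.
        "suspicious": (
            "write" not in user_ctx.get("permissions", [])
            and "pii" in file_ctx.get("labels", [])
        ),
    }

print(enrich(RAW_EVENT)["suspicious"])  # True for this example
```

The interesting signal (a read-only contractor touching a PII-labeled finance file) only emerges once the event is joined with inventory and identity data, which is exactly what log-only pipelines miss.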

To enable that, we need a real-time data warehouse that ingests and correlates not just logs but also identities, configurations, code, files, and threat intelligence. On top of this warehouse, we should run an AI engine capable of processing all this real-time data with human-grade reasoning.

One approach is a multi-model AI engine: a pipeline that combines semantic reasoning, behavioral analytics, and large language models. Semantic understanding and statistical ML handle the heavy lifting on high-volume data via low-latency pipelines, narrowing the slice that LLMs must correlate and reason over. The result: reliable reasoning at scale without blowing up latency or cost.
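
A minimal sketch of that funnel, under our own simplifying assumptions (the stage names, event shapes, and the z-score heuristic are illustrative, not Exaforce's implementation): cheap deterministic and statistical stages run over the full stream, and only the narrow remainder would ever reach an LLM.

```python
# Hypothetical sketch of the multi-model funnel: cheap semantic and
# statistical stages narrow the stream before any LLM call is made.
from statistics import mean, pstdev

def semantic_filter(events):
    """Stage 1: deterministic semantics — keep only sensitive actions."""
    SENSITIVE = {"file.permissions_changed", "iam.role_attached"}
    return [e for e in events if e["action"] in SENSITIVE]

def behavioral_filter(events, history):
    """Stage 2: statistical ML stand-in — flag events whose count
    deviates sharply from that user's historical baseline."""
    flagged = []
    for e in events:
        base = history.get(e["user"], [0])
        mu, sigma = mean(base), pstdev(base) or 1.0
        if (e["count"] - mu) / sigma > 3:  # simple z-score threshold
            flagged.append(e)
    return flagged

def llm_stage(events):
    """Stage 3: only this narrow slice would go to an LLM for deep
    reasoning; here we just mark it as escalated."""
    return [{**e, "escalated": True} for e in events]

stream = [
    {"action": "login", "user": "u1", "count": 2},
    {"action": "file.permissions_changed", "user": "u2", "count": 40},
    {"action": "file.permissions_changed", "user": "u3", "count": 1},
]
history = {"u2": [1, 2, 1, 2], "u3": [1, 1, 2]}

narrowed = behavioral_filter(semantic_filter(stream), history)
print(len(llm_stage(narrowed)))  # only the anomalous event survives
```

Three events enter; one anomalous event reaches the (expensive) reasoning stage. That ratio, applied at enterprise volume, is what keeps latency and cost in check.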

This approach has another advantage: the real-time data warehouse is critical for continuously training the AI engine to detect threats and triage alerts, but it also doubles as a long-term store for visibility and forensics. In effect, it replaces the legacy SIEM with a far more modern data platform, welcoming us to the SIEMless future of an AI-driven SOC.

How Will This Change the SOC?

  1. Threat Detection (Detection Engineers)
    In most organizations, this rare, specialized role evolves from writing brittle rules and tuning UEBA to designing adaptive systems. Instead of crafting detections for individual indicators or signatures, engineers steer AI-driven models that continuously correlate signals across logs, identities, configurations, and code repositories. The focus shifts from rule authoring to threat modeling and feedback loops that keep detections accurate over time.
  2. Alert Triage (SOC Analysts / Tier-1 & 2)
    Triage has long been dominated by repetitive enrichment, correlation, noise reduction, and chasing down users in IT or DevOps for confirmation. With our advanced AI engine plus human oversight, most of this work becomes automatable. Our triage bots (Exabots) work in tandem with human analysts to dramatically increase productivity. Over time, staffing models change: less dependence on large Tier-1/Tier-2 teams or outsourced MSSPs for 24/7 coverage, lower overall costs, and the ability to detect and triage many more alerts.
  3. Threat Hunters
    Hunting is where human intuition matters most, but it’s often throttled by slow queries, fragmented tools, and incomplete data. With a modern data architecture and the AI engine above, hunters can query correlated, context-rich information in real time, assisted by automated agents that surface anomalies and assemble timelines. Instead of spending hours gathering evidence, hunters program agents to test hypotheses, run adversary emulation, and pursue weak signals creatively, shifting from reactive casework to proactive defense.

Where This Goes Next

We’re confident that modern data architecture plus an evolved AI model can overcome many of today’s LLM limitations and progressively reduce the human oversight needed to get dependable outcomes from agentic systems. This matters most for mid-size companies, which must deliver enterprise-grade security without enterprise-size budgets or headcount. Done right, machines democratize security, letting many organizations leapfrog legacy architectures and their constraints.

In short: LLMs are a breakthrough, but security needs a rewired brain, one built for real-time ingestion, durable context, deterministic logic, and consistent reasoning at scale. That’s what we are working on at Exaforce.
