Human-only SOCs are unsustainable, but AI-only SOCs are still well out of reach of current technology.
The industry has answered by increasingly adopting hybrid approaches.
Today, hybrid SOCs are the method of choice for teams looking to leverage the capabilities of AI while keeping their feet firmly on the ground. Humans at the controls, AI doing the boring work, and everything coming together faster, more accurately, and with judgment at the helm.
Meet the hybrid SOC – a model where AI agents answer to humans – and find out why these half-human, half-machine teams are redefining cybersecurity.
Losing Time in Human-Led Investigations
Gartner predicts that by 2026, over half of all SOCs will be using some type of AI-based decision-support.
It’s not that people aren’t smart enough anymore, or even that the landscape is “too complex” for analysts to hunt down today’s problems. The issue is scale, and often scale alone.
The average human-led investigation takes roughly 10-20 minutes per alert (with some estimates putting it at 30-60 minutes). In a world where SOCs deal with hundreds (if not thousands) of alerts per day, even narrowing the queue to high-priority issues still leaves teams with dozens of investigations to work through.
This would be difficult for a SOC of any size, even if it were fully staffed (and those analysts had nothing else to do).
But when AI is added to the mix, things change. As noted by Prophet Security, a leading provider of AI SOC solutions, “median time to investigate drops from 30-plus minutes to under five” and “investigation coverage extends to 100% of alerts rather than the fraction most teams can manually review.”
This completely changes the game. Here’s how.
What AI Brings to the Table in Investigations
AI alone is powerful. But agentic AI goes a step further, doing everything AI does and then some.
In a hybrid SOC scenario, agentic AI – the kind that plans and reasons on its own, guided by human prompts – is used in an intern-like capacity. Imagine a very good, very accurate newbie that doesn’t tire and does exactly what you say, exactly when you say it. That’s agentic AI.
You get:
- Autonomous Investigations: AI agents gather data, correlate evidence, and come to conclusions for every alert. Is this a false positive? Is this a viable attack path? Is this worth escalating? No stone is left unturned; nothing gets missed.
- Resolution, Not Guesswork: Instead of closing out incidents with a “probability” of being benign, AI agents go the full mile, running every lead to ground before closing an incident out.
- Context and Audit Trails: Alerts come pre-prioritized and enriched with context from around the environment. AI agents not only assemble telemetry from other tools; they go one step further and examine forensics on good leads. And they record every step.
These capabilities are what human analysts would be doing anyway, but on nights, weekends, and on alert 942 of the day. Pair this with unmatched speed and accuracy, and you see why SOCs need an AI-supported approach.
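The investigate-everything, log-everything flow described above can be sketched as a simple pipeline. This is a minimal illustration, not any vendor’s actual implementation: all names (`Alert`, `enrich`, `investigate`, `triage`) are hypothetical, and the verdict logic is a placeholder for what an agentic system would decide by correlating evidence.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    id: str
    source: str
    severity: str
    audit_trail: list = field(default_factory=list)  # every step is recorded

def enrich(alert):
    # A real platform would pull telemetry from EDR, identity, and network
    # tools here; this sketch just records that the step happened.
    alert.audit_trail.append(f"enriched {alert.id} with context from {alert.source}")
    return alert

def investigate(alert):
    # Placeholder verdict logic; an agentic system would correlate evidence
    # and reason over it before escalating or closing.
    verdict = "escalate" if alert.severity == "high" else "close"
    alert.audit_trail.append(f"verdict: {verdict}")
    return verdict

def triage(alerts):
    # Every alert gets investigated (100% coverage), each with an audit trail.
    return {a.id: investigate(enrich(a)) for a in alerts}

alerts = [Alert("A-001", "EDR", "high"), Alert("A-002", "email", "low")]
print(triage(alerts))  # → {'A-001': 'escalate', 'A-002': 'close'}
```

The point of the sketch is the shape of the loop: enrichment, a definitive verdict, and a per-step audit trail, applied to every alert rather than a sampled fraction.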
Where Do the Humans Come In?
These automated, autonomous capabilities can make it seem like a SOC could be run entirely by AI. Not yet.
Humans are still needed at the top, making the decisions and green-lighting the playbooks and policies. Analysts go from doing rote tasks (like triaging alerts and querying data) to focusing on the “big brain” work: judgment, validation, and final decision-making.
This doesn’t just keep humans “in the loop,” but at the helm.
Speaking to this point, Avani Desai, CEO at cybersecurity firm Schellman, said that she is a “big believer that human-in-the-loop is not enough when we’re talking about truly agentic AI.”
Instead, she is in favor of human-in-command setups. “You don’t just supervise, you design control systems and guardrails,” she states.
This is what is enabled in a truly hybrid SOC.
Empowering Employees with AI-Enabled Answers
And then there’s the benefit of fast lookups and fast answers. There’s a skills gap between where most SOCs are and where they need to be, a gap that existed before AI and has only widened since.
But with Natural Language Queries (NLQs), AI is, ironically, helping us close it. A mid-tier analyst could be looking at a sophisticated attack path (surfaced by her AI SOC platform) and not be able to entirely connect the dots.
She could ask, “Walk me through it,” and the AI would summarize in plain language what’s going on, along with remediation steps. The analyst would still be in charge of making the decisions, deploying the bots, and overseeing the task. But the AI would be instrumental in getting her there.
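In practice, a “walk me through it” request boils down to wrapping the analyst’s question and the evidence into a prompt for the platform’s language model. The sketch below only shows that prompt assembly; the function name, input format, and wording are all hypothetical, and the actual LLM call is platform-specific and omitted.

```python
def build_walkthrough_prompt(question, attack_path):
    # Number the attack-path steps so the model can reference them in order.
    steps = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(attack_path))
    return (
        "You are a SOC assistant. Explain the following attack path in plain "
        "language and suggest remediation steps.\n\n"
        f"Attack path:\n{steps}\n\n"
        f"Analyst question: {question}"
    )

prompt = build_walkthrough_prompt(
    "Walk me through it",
    ["Phishing email delivered", "Credential theft", "Lateral movement to file server"],
)
print(prompt)
```

The value of NLQs is exactly this translation layer: the analyst asks in plain English, and the platform handles turning evidence plus question into something the model can explain back.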
Auto-Documentation Streamlining Human Decisions
Reporting is a necessary evil among analysts, and one that can also be made lighter by the AI half of a hybrid SOC.
Good AI SOC platforms don’t operate on a “black box” model; they show their work. They keep track of what they did and maintain a paper trail for auditors. This not only helps in an audit but also gets all stakeholders on the same page during investigations.
CEOs and executives get a high-level view of the problem. CISOs and managers get a report that’s more technically in-depth. And analysts on the ground, along with auditors, can get one at whatever level of fine-toothed detail they require.
Again, humans dictate the parameters of the reports; AI, working and tracking constantly in the background, produces them.
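One investigation record rendered at three levels of detail is all the tiered reporting above amounts to. Here is a minimal sketch under that assumption; the audience names, record fields, and formatting are illustrative, not any platform’s real report schema.

```python
def render_report(record, audience):
    # One audit trail, three levels of detail; the tiers themselves are
    # parameters that humans set.
    if audience == "executive":
        # High-level view: title and outcome only.
        return f"{record['title']}: {record['outcome']}"
    if audience == "ciso":
        # More technical depth: outcome plus a count of actions taken.
        return f"{record['title']} ({record['outcome']}), {len(record['steps'])} steps taken"
    # Auditor/analyst: the full step-by-step trail.
    return "\n".join([record["title"]] + record["steps"] + [record["outcome"]])

record = {
    "title": "Alert A-001 investigation",
    "outcome": "contained",
    "steps": ["enriched with EDR telemetry", "correlated with auth logs", "isolated host"],
}
for audience in ("executive", "ciso", "auditor"):
    print(render_report(record, audience), "\n---")
```

Because every report draws on the same underlying audit trail, the three views can never drift out of sync, which is what keeps stakeholders on the same page during an investigation.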
Keeping Humans at the Helm
Hybrid SOCs recognize the danger of dumping modern cybersecurity demands squarely on either humans (who can’t scale) or machines (which lack judgment and accountability).
You need a mix of both, with humans in the lead to set the stage, implement the guidelines, establish the boundaries, and make the final judgment calls.
As Nikki Webb, director at Custodian360 and AI SOC user, says, “The future is not about replacing people with AI, it is about AI supporting people. Analysts must stay at the center of SOC operations, because only humans can truly separate noise from risk.”