New IBM Security Services Aim to Counter Security Risks of AI Frontier Models
IBM wants to use AI agents to help organizations assess their ability to protect themselves from the cybersecurity risks that increasingly advanced AI frontier models pose when they’re used by threat actors.
The IT giant's consulting unit this week unveiled IBM Autonomous Security, a collection of specialized, coordinated agents that Big Blue executives said will wend their way through an enterprise's often-sprawling security stack and enable it to work more as a singular system than as a collection of standalone tools.

The agents are charged with a range of tasks: analyzing weaknesses in software that expose it to cyber risks, finding exploit paths in runtime environments, bolstering security practices, enforcing security policies in security tools, detecting anomalies, and containing cyberthreats.

At the same time, IBM Consulting is offering a new cybersecurity assessment service that can detail security weaknesses in an organization's IT environment, such as gaps in security, exposures in AI policies, and the potential paths bad actors can use to exploit those weaknesses.
The agents will spot security issues, including attacks, and respond to them. In addition, the service will offer enterprises mitigation guidance that details priorities and shows how they can more quickly detect and respond to agentic threats by enhancing automation in their operations and improving their architectural alignment.

Frontier Models' Dual Uses

Frontier models are raising both the cybersecurity capabilities of organizations and the cyberthreat risks those organizations face, including accelerating the speed of attacks and lowering the skill level bad actors need to launch sophisticated, automated campaigns.

"Frontier AI offers significant promise for cybersecurity, including accelerating vulnerability discovery and patching, optimizing defensive systems, and enhancing threat detection capabilities," the Frontier Model Forum, an industry group launched three years ago by Microsoft, Google, Anthropic, and OpenAI to promote the safe and responsible development of frontier models, wrote in February. "However, these same capabilities create dual-use risks, potentially lowering barriers for malicious actors to exploit known vulnerabilities or discover new attack vectors. As AI capabilities advance, it is crucial to develop robust risk management frameworks that maximize security benefits while proactively addressing emerging risks."

Mark Hughes, global managing partner of cybersecurity services for IBM Consulting, echoed the sentiment, saying in a statement that "frontier models are creating a new category of enterprise threat that is fast moving, systemic and increasingly autonomous. Meeting that threat requires a systemic defense."

Illustrating the Dangers

The cybersecurity industry this month got hard lessons in what frontier models are capable of.
Anthropic executives last week unveiled Claude Mythos Preview, a general-purpose frontier model that they wrote was "strikingly capable at computer security tasks." The model is particularly good at identifying software vulnerabilities, detecting some that had gone undetected for more than two decades. That said, it is equally good at autonomously creating exploits for those security flaws, so good that Anthropic is limiting its release to particular users.

"Claude Mythos Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," they wrote.

The vendor also is using the model to create guardrails for an upcoming version of its Claude Opus model that won't pose the same level of risk as Mythos. In addition, Mythos Preview is serving as the foundation of Project Glasswing, an initiative launched with Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. The companies will use Mythos Preview in their work developing defensive security technologies, and more than 40 other organizations that build or maintain key software infrastructure will use it to scan and secure systems running their first-party and open-source software.

OpenAI and GPT-5.4-Cyber

This week, OpenAI introduced GPT-5.4-Cyber, a cybersecurity-focused variant of its GPT-5.4 model that, for similar reasons, will also see a limited release through the company's Trusted Access for Cyber (TAC) program.

"Our goal is to make these tools as widely available as possible while preventing misuse," OpenAI executives wrote.
“Ultimately, we aim to make advanced defensive capabilities available to legitimate actors large and small, including those responsible for protecting critical infrastructure, public services, and the digital systems people depend on every day.”
