IRONSCALES Adds Three AI Agents Trained to Automate Cybersecurity Tasks
IRONSCALES today revealed it has developed three artificial intelligence (AI) agents for its email security platform, including one that conducts red team attacks to uncover vulnerabilities and weaknesses that adversaries can exploit.
Audian Paxson, principal technical strategist for IRONSCALES, said that with the Winter 2026 release of the company’s platform, cybersecurity teams will also be able to leverage two other AI agents to simulate phishing attacks and analyze suspicious emails.

The Winter release also includes email encryption for outbound data that can be applied based on policies or the sensitivity of a specific workflow, as well as enhancements to the deepfake protection for Microsoft Teams that compares voice patterns to identify when employees are being impersonated using AI.

Collectively, these capabilities will enable cybersecurity teams both to prevent issues from arising in the first place and to resolve them faster when they inevitably do arise, added Paxson.

Rather than relying on a general-purpose large language model (LLM) to provide these capabilities, IRONSCALES opted to build a smaller AI model that it specifically trained to perform these tasks, noted Paxson.

The Red Teaming Agent uses that capability to perform the same reconnaissance any attacker would, including scanning social media, press releases and job postings to map an organization’s exposure. The agent has also been trained to monitor the tactics and techniques that adversaries are developing, which are then incorporated into stress tests it can launch against an email system, said Paxson.
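IRONSCALES has not published implementation details for the Teams deepfake protection, but voice-pattern comparison of this kind is commonly built on speaker embeddings: a caller's live voice is converted to a numeric vector and compared against an enrolled reference for that employee. A minimal sketch of the comparison step, with hypothetical function names and a made-up similarity threshold:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_likely_impersonation(enrolled, live, threshold=0.75):
    """Flag a call when the live voice embedding diverges too far
    from the employee's enrolled voiceprint.

    The 0.75 threshold is illustrative only; a real system would
    calibrate it against false-accept/false-reject rates.
    """
    return cosine_similarity(enrolled, live) < threshold

# Illustrative: a matching voice scores high, a mismatched one low.
enrolled_voiceprint = [0.9, 0.1, 0.4]
suspicious_caller = [0.1, 0.9, 0.2]
print(is_likely_impersonation(enrolled_voiceprint, suspicious_caller))
```

In practice the embeddings would come from a trained speaker-verification model rather than raw audio samples; the sketch only shows the comparison logic that sits on top.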
The Red Teaming Agent then surfaces those findings so cybersecurity teams can harden detection before an actual attack occurs. The overall goal is to enable organizations to achieve a higher level of resiliency in a Phishing 3.0 era in which adversaries are taking advantage of AI to launch more sophisticated attacks, said Paxson.

Fernando Montenegro, vice president and practice lead for cybersecurity and resilience at The Futurum Group, said these latest updates from IRONSCALES are a clear example of the dual nature of AI. The same technology that, for example, increases the likelihood of deepfake attacks can, when applied to defense, yield significant progress in shoring up protections, he added.

“The use of specialized AI models also enables vendors to provide functionality in a way that more efficiently encodes their expertise on this particular domain,” noted Montenegro.

It’s not clear at what pace cybersecurity teams are adopting AI to combat increasingly sophisticated threats. The hope is that as the frequency of these attacks increases, cybersecurity teams will be able to reduce, if not eliminate, the level of stress they currently experience at a time when many organizations are not increasing staff.

In the meantime, cybersecurity teams would be well advised to continue identifying rote tasks that might be better performed first by an AI agent. The important thing to remember, however, is to review the agent’s recommendations before acting on them.
