Generative AI: The Double-Edged Sword of Cybersecurity
The AI hacking era is here – and it’s reshaping the human risk landscape as we know it.
With good and bad actors increasingly using generative AI (GenAI) to deploy and defend against attacks, organizations must understand the root of it all: people. The 2025 Data Breach Investigations Report by Verizon found that 60% of cybersecurity breaches involved a human element. Given how easily mistakes are made, AI is well positioned to exploit those human flaws. Most organizations recognize the risk: 81% are concerned about GenAI leading to sensitive data leaks. Yet over half (55%) have no strategy for AI-driven threats, a significant gap in readiness.
It’s abundantly clear that AI is drastically changing the human risk landscape. To close that readiness gap, it’s critical to work directly with people to develop strategies and adapt as GenAI’s role in cybersecurity continues to evolve.

GenAI Supercharges Human-Centric Attacks

GenAI’s use in cyberattacks has gone far beyond what could have been anticipated, allowing hackers to “supercharge” their attacks. Because GenAI tools are now widely accessible at low cost, the barrier to entry has all but disappeared: attackers no longer need advanced coding expertise to develop malware or launch sophisticated campaigns.

One of the most striking ways hackers are weaponizing GenAI is by advancing social engineering. Through tactics like customized phishing, voice deepfakes, and synthetic personas, GenAI can make attacks feel alarmingly real. One of the most effective examples centered on a deepfake audio call. The attacker assumed the persona of a CFO and called the company’s finance team. The call had every hallmark of a real emergency: a panicked tone, a sense of urgency, and the emotional cues of a human caller. Sure enough, the deepfake was convincing enough for the victim to push through a wire transfer to someone they believed was the CFO. The danger is that GenAI is moving attacks from static scams to dynamic manipulation, opening a new door for attackers. But despite the real-world impacts we’re already seeing, not all of GenAI’s use cases are negative.

The Flipside – Enhancing the Human Defense Layer

Despite all the fears AI-generated attacks bring, GenAI’s role in cybersecurity is more than just a threat. On the defensive side, AI tools give humans the support they need to strengthen the human defense layer. GenAI helps tackle the signal-to-noise problem that makes attacks hard to detect. Security teams are constantly drowning in alerts, data logs, and fragmented signals, leaving little time for thorough analysis. Advanced AI techniques can significantly reduce that cognitive load by summarizing, clustering, and prioritizing incoming data, so teams can pinpoint the problem more efficiently.

These tools can also be used to understand the people behind the keyboard, not just the endpoints. Advanced AI techniques can pick up shifts in communication behavior, such as sudden tone changes, message sentiment, stress signals, and even anomalies in who is talking to whom and how, warning security teams of potential internal human risk. Combining those signals with intent detection in messages gives security teams a real-time sense of human exposure and the ability to shut threats down before they escalate.

Practical Steps for CIOs and CISOs

AI-enabled attacks and defenses are already making an impact, and you can’t afford to wait for the perfect framework before protecting your organization. Encourage security leaders to start small and move fast. The first place to start? Using GenAI and other advanced AI techniques yourself. By testing GenAI’s limits and understanding its ins and outs, it becomes easier to know what to look out for and how to train employees to detect AI-generated content effectively. Treating AI like the unknown is a losing game – you must be AI literate to protect your company. The two sketches below illustrate what that kind of small-scale experimentation might look like.
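As a concrete example of the “summarize, cluster, prioritize” pattern described above, here is a minimal sketch of alert triage. The sample alerts, the cluster count, and the size-based prioritization are purely illustrative assumptions rather than a reference to any specific product; the point is simply how grouping related signals shrinks the pile an analyst has to read.

```python
# Minimal sketch: cluster similar security alerts and surface the largest
# groups first. Alert text and cluster count are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

alerts = [
    "Impossible travel: login from two countries within 10 minutes",
    "Login from new device followed by mailbox forwarding rule change",
    "Mass file download from shared drive outside business hours",
    "Phishing report: invoice-themed email with credential-harvesting link",
    "Phishing report: CEO gift-card request sent to finance team",
    "Unusual outbound transfer request flagged by payment system",
]

# Represent each alert as a TF-IDF vector so related alerts land together.
vectors = TfidfVectorizer(stop_words="english").fit_transform(alerts)

# Cluster related alerts to cut duplicate noise for the analyst.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

clusters = {}
for alert, label in zip(alerts, labels):
    clusters.setdefault(label, []).append(alert)

# Prioritize clusters by size (a stand-in for real risk scoring) so the
# widest-spread issue is reviewed first instead of raw alert order.
for label, items in sorted(clusters.items(), key=lambda kv: len(kv[1]), reverse=True):
    print(f"Cluster {label} ({len(items)} alerts):")
    for item in items:
        print(f"  - {item}")
```

In practice, prioritization would draw on your own telemetry and threat intelligence rather than cluster size alone.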
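In the same spirit, here is a minimal sketch of the communication-shift idea: flagging a message whose tone deviates sharply from a sender’s usual baseline. The keyword list, sample messages, and threshold are hypothetical stand-ins for the sentiment and behavioral models a real deployment would use.

```python
# Minimal sketch: flag a sudden tone shift relative to a sender's baseline.
# Keyword list, messages, and threshold are hypothetical illustrations only.
from statistics import mean, pstdev

URGENCY_WORDS = {"urgent", "immediately", "now", "wire", "confidential", "asap"}

def urgency_score(message: str) -> float:
    """Fraction of words in a message that signal pressure or urgency."""
    words = message.lower().split()
    return sum(w.strip(".,!?:;") in URGENCY_WORDS for w in words) / max(len(words), 1)

# Hypothetical message history for one sender (oldest first).
history = [
    "Please review the Q3 budget draft when you get a chance.",
    "Thanks, the vendor invoice looks fine to me.",
    "Can we move our sync to Thursday afternoon?",
]
incoming = "Urgent: wire the funds immediately and keep this confidential."

baseline = [urgency_score(m) for m in history]
score = urgency_score(incoming)

# Flag messages that deviate sharply from this sender's usual tone.
threshold = mean(baseline) + 3 * (pstdev(baseline) or 0.05)
if score > threshold:
    print(f"Tone shift flagged for review: score {score:.2f} vs. baseline {mean(baseline):.2f}")
```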
There’s no way to predict when you’ll be hit with an AI-powered attack, but preparing for the inevitable is key. First and foremost, train your team to be wary of the kinds of requests that come with these advanced attacks. Whether it’s phishing, spoofing, or smishing, knowing what to look for in each is crucial to maintaining a secure enterprise environment. Additionally, lean into AI solutions as a defense layer. While most of the conversation centers on AI as a threat, organizations often forget to use GenAI and other AI tools to combat that same threat. Whether it’s flagging suspicious activity or stress signals, advanced AI defenses are crucial to strengthening the human firewall.

Shifting from Reactive to Proactive

GenAI is a multiplier of both risk and resilience. As it continues to evolve across the cyber threat and defense landscape, leaders must ensure their people are equipped for the future and have the skills to adapt their security lens accordingly. The only way to keep up with AI is to use it yourself. Begin by embracing experimentation and cross-functional AI literacy to secure the human layer before your own employees become a critical failure point in your organization.
