The Dual Role of AI in Cybersecurity: Shield or Weapon?

Artificial intelligence isn’t just another tool in the security stack anymore – it’s changing how software is written, how vulnerabilities spread and how long attackers can sit undetected inside complex environments. Security researcher and startup founder Guy Arazi unpacks why AI has become both a powerful defensive accelerator and a force multiplier for adversaries, especially in application security.
Arazi traces the problem back to basic economics: Organizations are shipping code faster than ever with AI-assisted development, but product and AppSec teams haven’t grown at the same pace and are still using tools built for a pre-AI era. Even before generative AI, those teams were drowning in vulnerability backlogs and false positives. Now, AI agents can copy and reuse insecure patterns across dozens or hundreds of services, turning individual bugs into systemic weaknesses that nobody fully understands.
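To make that concrete, here is a minimal, hypothetical illustration of the kind of insecure pattern that can quietly propagate when code is generated or copied at scale: a query built by string interpolation (a classic SQL injection bug) next to the parameterized version that should replace it. The function and table names are invented for the example and are not from Arazi's talk.

```python
import sqlite3

# Hypothetical example: an insecure pattern that is easy for an AI assistant
# (or a copy-pasting developer) to replicate across many services.

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # BAD: user input is interpolated directly into the SQL string.
    # If this pattern is copied into dozens of services, one bug class
    # becomes a systemic, organization-wide weakness.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # GOOD: parameterized query; the database driver handles escaping.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```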
That asymmetry is amplified by the way attackers operate. Their KPI is simple—compromise or not—and they’re willing to play the long game, quietly weaponizing public research, bug bounty reports and one-off disclosures into broad campaigns. A single published exploit path can become a blueprint for probing every similar feature and service an organization runs, especially when internal defenses don’t consistently apply “defense in depth” beyond internet-facing surfaces.
Arazi argues that defenders need to rethink both prioritization and how they use AI themselves. High-impact, proven exploit paths—whether found by internal engineers, pen testers or external researchers—should be treated as critical signals and hunted across the entire codebase, not fixed in isolation. At the same time, teams should lean on AI to encode local rules, automate pull-request reviews and reduce repeat mistakes, while still relying on human experts to validate what the models miss or misjudge.
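As a rough sketch of what that could look like in practice, the script below encodes a few "local rules" as regular expressions and runs them against the Python files changed in a pull request, so a proven bad pattern is hunted in every change rather than fixed once. The rule patterns, the `origin/main` base branch, and the `LOCAL_RULES` list are placeholders for illustration, not a real organization's policy; a real pipeline would pair a check like this with human review and, optionally, an LLM-assisted review pass.

```python
"""Minimal sketch: encode local security rules and check a pull request's
changed files against them in CI. Rules and paths here are placeholders."""

import re
import subprocess
import sys
from pathlib import Path

# Hypothetical local rules: each proven exploit path or recurring mistake
# becomes a pattern that is checked on every change, not fixed in isolation.
LOCAL_RULES = [
    (re.compile(r"execute\(\s*f[\"']"), "possible SQL built with an f-string"),
    (re.compile(r"verify\s*=\s*False"), "TLS certificate verification disabled"),
    (re.compile(r"yaml\.load\((?!.*Loader)"), "yaml.load without an explicit Loader"),
]

def changed_files(base_ref: str = "origin/main") -> list[Path]:
    """Python files touched by the current branch relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base_ref, "--", "*.py"],
        capture_output=True, text=True, check=True,
    )
    return [Path(p) for p in out.stdout.splitlines() if Path(p).is_file()]

def scan(paths: list[Path]) -> list[str]:
    """Return one finding per rule match, as 'file:line: message'."""
    findings = []
    for path in paths:
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for pattern, message in LOCAL_RULES:
                if pattern.search(line):
                    findings.append(f"{path}:{lineno}: {message}")
    return findings

if __name__ == "__main__":
    problems = scan(changed_files())
    for finding in problems:
        print(finding)
    # Fail the CI job so a human reviewer (or an LLM-assisted review step)
    # looks at the flagged lines before merge.
    sys.exit(1 if problems else 0)
```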
We may be at the “beginning of the beginning” for AI in security, Arazi says, but the gap between how quickly AI can introduce risk and how slowly enterprises adapt is already here. The job now is to close that gap before adversaries exploit it.
