Human Risk Report Reveals Overconfidence in Phishing Defenses

Being confident is cool, but overconfidence is another matter.

A new report reveals that despite continued confidence in cybersecurity defenses, everyday employee behaviors — from phishing errors to risky AI practices — remain a leading cause of data breaches.

Arctic Wolf, a security operations firm, has released its second annual Human Risk Behavior Snapshot, an independent survey of more than 1,700 IT leaders and end users worldwide.

“In my experience as an FBI agent and security leader, I’ve found that technology alone does not keep us safe. The human element, including our behaviors, our habits, and our decisions, is an ever-present and unpredictable variable in our layers of security,” said Arctic Wolf SVP and CISO Adam Marrè in a blog post.

Nightm(ai)res and malicious links

As cyber threats grow and generative AI becomes ingrained in daily workflows, the human factor is emerging as one of the most unpredictable aspects of cybersecurity. Leaders’ overconfidence and employees’ tendency to sidestep security measures are widening the gap between perceived protection and actual vulnerability. The Human Risk Behavior Snapshot aims to help organizations identify and manage these people-driven risks.

According to the 2025 report, 68% of IT leaders said their organization suffered a breach in the past year, up 8% from 2024. Australia, New Zealand, and the U.K. and Ireland saw the sharpest increases. Nearly two-thirds of IT leaders and half of employees admitted to clicking on malicious links, yet three-quarters of leaders still believe their organizations are safe. One in five leaders who clicked did not report the incident. Senior leadership teams remain high-value targets, with 39% facing phishing attempts and 35% encountering malware infections.

The rise of generative AI is compounding data risks. The survey found that 80% of IT leaders and 63% of employees use AI tools for work, with 60% of leaders and 41% of staff acknowledging they have entered confidential data into these platforms. Meanwhile, only 54% of organizations enforce multifactor authentication for all users, leaving entry-level accounts exposed.

The report also highlights a growing divide in how organizations respond to human error. "Training beats termination," it notes, even as 77% of IT leaders say they would fire staff who fall for scams, up from 66% last year. Companies that focus on corrective training instead see an 88% reduction in risk.

“The rise of generative AI has created powerful new tools — but also powerful new risks,” adds Marrè. “When leaders are overconfident in their defenses while overlooking how employees actually use technology, it creates the perfect conditions for mistakes to become breaches. Progress comes when leaders accept that human risk is not just a frontline issue but a shared accountability across the organization. Reducing that risk means pairing stronger policies and safeguards with a culture that empowers employees to speak up, learn from errors, and continuously improve.”
