A recent survey conducted by HackerOne, a security research platform, found that 48% of security professionals consider AI the most significant security risk to their organization. Their primary concerns about AI include:
- Disclosure of training data (35%).
- Unauthorized use (33%).
- External hacking of AI models (32%).
These concerns underscore the urgent need for companies to review their AI security strategies before vulnerabilities turn into real threats.
AI often leads to false positives for security teams
Although the complete Hacker Powered Security Report will not be released until later this autumn, additional research from a SANS Institute report sponsored by HackerOne points out that 58% of security professionals believe that security teams and malicious actors might engage in an “arms race” to exploit generative AI tactics and methodologies.
According to the SANS survey, security professionals have successfully used AI to automate repetitive tasks (71%). However, they acknowledged that threat actors could just as easily use AI to streamline their own operations. Respondents were most concerned about AI-powered phishing campaigns (79%) and automated exploitation of vulnerabilities (74%).
“Security teams need to identify the most suitable ways to leverage AI to outwit adversaries while also recognizing its existing constraints, or else they risk doubling their workload,” remarked Matt Bromiley, an analyst at the SANS Institute, in a press release.
The remedy? External review of AI deployments. A majority of respondents (68%) chose "external review" as the most effective way to identify AI safety and security issues.
“Teams now have a more practical view of AI’s current limitations compared to last year,” observed Dane Sherrets, Senior Solutions Architect at HackerOne, in an email to TechRepublic. “Humans bring indispensable context to both defensive and offensive security that AI cannot yet replicate fully. Issues like hallucinations have made teams hesitant to implement the technology in critical systems. Nonetheless, AI remains invaluable for enhancing efficiency and handling tasks that do not necessitate profound context.”
Additional insights from the SANS 2024 AI Survey, published this month, indicate:
- 38% are planning to integrate AI into their security strategies in the future.
- 38.6% of respondents have encountered shortcomings when using AI to detect or respond to cyber threats.
- Legal and ethical implications pose a challenge to 40% of AI adopters.
- 41.8% of organizations have faced resistance from employees who distrust AI decisions, attributed by SANS to a “lack of transparency.”
- 43% of firms currently incorporate AI in their security strategies.
- AI technology in security operations is predominantly used in anomaly detection systems (56.9%), malware detection (50.5%), and automated incident response (48.9%).
- 58% of respondents said AI systems struggle to detect novel threats or respond to outlier indicators, which SANS attributes to insufficient training data.
- Among those who reported deficiencies when using AI for threat detection or response, 71% mentioned instances of false positives generated by AI.
Anthropic invites insights from security researchers on AI security practices
Anthropic, a generative AI developer, expanded its bug bounty program on HackerOne in August.
Specifically, Anthropic is inviting the hacker community to stress-test "the preventive measures we implement to deter misuse of our models," including attempts to break the safeguards meant to keep AI from providing instructions for explosives or cyberattacks. Anthropic is offering rewards of up to $15,000 to those who identify new jailbreaking attacks and will give HackerOne security researchers early access to its upcoming safety mitigation system.
