Do the security benefits of generative AI outweigh its risks? Only 39% of security professionals say the rewards outweigh the risks, according to a new report by CrowdStrike.
In a 2024 survey, CrowdStrike polled 1,022 security researchers and practitioners from the U.S., APAC, EMEA, and other regions. The findings showed that cybersecurity professionals are deeply concerned about the challenges associated with AI. While 64% of respondents have either purchased generative AI tools for work or are researching them, the majority remain cautious: 32% are still exploring the tools, while only 6% are actively using them.
What are security experts looking for in generative AI?
According to the report:
- The top motivation for adopting generative AI isn't addressing a skills shortage or meeting leadership mandates; it is improving the ability to respond to and defend against cyber threats.
- Cybersecurity professionals aren't drawn to AI in general. Instead, they want generative AI paired with security expertise.
- 40% of respondents said the rewards and risks of generative AI are roughly equal, while 39% said the rewards outweigh the risks and 26% said they do not.
“Security teams want to deploy GenAI as part of a platform to get more value from existing tools, elevate the analyst experience, accelerate onboarding and eliminate the complexity of integrating new point solutions,” the report noted.
Assessing ROI has been an ongoing challenge when adopting generative AI tools. CrowdStrike found that quantifying ROI was the top economic concern among respondents, followed by the cost of licensing AI tools and confusing or unclear pricing models.
CrowdStrike broke down ways of measuring AI ROI into four categories, ranked by importance (a rough worked example follows the list):
- Cost savings from platform consolidation and more efficient security tool use (31%).
- Fewer security incidents (30%).
- Less time spent managing security tools (26%).
- Shorter training cycles and associated costs (13%).
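As a rough, back-of-the-envelope illustration of how such a calculation might combine those four categories, the Python sketch below computes ROI from hypothetical annual figures. Every dollar amount and the simple (savings - cost) / cost framing are assumptions made for illustration; CrowdStrike's report does not prescribe a formula.

```python
# Hypothetical back-of-the-envelope ROI estimate for a generative AI
# security tool. All dollar figures are illustrative assumptions.

annual_savings = {
    "platform_consolidation": 120_000,   # tools retired after consolidating
    "fewer_security_incidents": 90_000,  # avoided incident-response costs
    "less_tool_maintenance": 45_000,     # analyst hours freed up
    "shorter_training": 15_000,          # faster onboarding
}

annual_cost = 150_000  # assumed licensing plus integration cost

total_savings = sum(annual_savings.values())
roi = (total_savings - annual_cost) / annual_cost

print(f"Estimated annual savings: ${total_savings:,}")
print(f"Estimated ROI: {roi:.0%}")  # $270,000 in savings -> 80% ROI here
```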
Adding AI to an existing platform rather than buying a standalone AI product could yield “progressive savings linked with broader platform consolidation initiatives,” CrowdStrike noted.
SEE: A cybercriminal group has claimed responsibility for the late November cyberattack that disrupted operations at Starbucks and other organizations.
Could generative AI introduce more security problems than it solves?
On the other hand, generative AI itself needs to be secured. CrowdStrike's survey found that security professionals were most concerned about data exposed to the LLMs underlying AI products and attacks launched against generative AI tools.
Other concerns included:
- A lack of guardrails or controls in generative AI tools.
- AI hallucinations.
- Insufficient public policy regulation around generative AI use.
Nearly all respondents (about 9 in 10) said their organizations have implemented new security policies or are developing policies around managing generative AI within the next year.
How businesses can use AI to protect against cyber threats
Generative AI can be used for brainstorming, research, or analysis, with the understanding that its output often needs to be verified. Generative AI can pull data from multiple sources into one window in various formats, shortening the time it takes to investigate an incident. Many automated security platforms offer generative AI assistants, such as Microsoft's Security Copilot.
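As a sketch of that aggregation pattern, the snippet below gathers findings from several mocked sources into a single prompt for a summarization model. The data sources, field values, and the query_llm helper are hypothetical placeholders rather than any specific product's API.

```python
# Minimal sketch of using a generative AI assistant to consolidate
# incident data from several sources into one summary. The sources and
# query_llm() are hypothetical placeholders, not a real product API.

def fetch_edr_alerts(host: str) -> list[str]:
    return ["Suspicious PowerShell spawned by winword.exe at 09:14 UTC"]

def fetch_firewall_logs(host: str) -> list[str]:
    return ["Outbound connection to 203.0.113.42:443 flagged at 09:15 UTC"]

def fetch_identity_events(user: str) -> list[str]:
    return ["Impossible-travel sign-in for jdoe at 09:20 UTC"]

def query_llm(prompt: str) -> str:
    """Placeholder for a call to whatever LLM the platform exposes."""
    return "(model-generated incident summary would appear here)"

def summarize_incident(host: str, user: str) -> str:
    # Collect findings that an analyst would otherwise gather tool by tool.
    findings = (
        fetch_edr_alerts(host)
        + fetch_firewall_logs(host)
        + fetch_identity_events(user)
    )
    prompt = (
        "Summarize the following security findings as a single incident "
        "timeline, and suggest next investigative steps:\n- "
        + "\n- ".join(findings)
    )
    return query_llm(prompt)

print(summarize_incident(host="WS-0142", user="jdoe"))
```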
Generative AI can help defend against cyber threats by:
- Detecting and analyzing threats.
- Automating incident response.
- Identifying phishing attempts (a minimal sketch follows this list).
- Enhancing security analytics.
- Generating synthetic data for training.
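To make the phishing item concrete, here is a minimal sketch of LLM-assisted phishing triage under assumed requirements; the prompt wording and the query_llm stub are illustrative, not a vendor's implementation.

```python
# Illustrative sketch of LLM-assisted phishing triage. The prompt design
# and query_llm() stub are assumptions for demonstration only.

SYSTEM_PROMPT = (
    "You are a security assistant. Given an email, answer with exactly "
    "'PHISHING' or 'LEGITIMATE', then one sentence of reasoning."
)

def query_llm(system: str, user: str) -> str:
    """Stand-in for a call to whichever model an organization has vetted."""
    return "PHISHING: urgent payment request from a lookalike domain."

def classify_email(sender: str, subject: str, body: str) -> str:
    email = f"From: {sender}\nSubject: {subject}\n\n{body}"
    verdict = query_llm(SYSTEM_PROMPT, email)
    # A production system would validate the verdict, log it, and route
    # suspected phishing to a human analyst rather than act on it alone.
    return verdict

print(classify_email(
    sender="billing@paypa1-support.example",
    subject="URGENT: account suspended",
    body="Click here within 24 hours to verify your payment details.",
))
```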
However, organizations should weigh safety and privacy safeguards as part of any generative AI purchase. Doing so can protect sensitive data, ensure compliance with regulations, and mitigate risks such as data breaches or misuse. Without proper safeguards, AI tools can expose vulnerabilities, generate harmful output, or violate privacy laws, resulting in financial, legal, and reputational damage.
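One concrete example of such a safeguard, sketched below under assumed requirements, is redacting obviously sensitive values before a prompt ever reaches an external model; the regex patterns are illustrative and far from exhaustive.

```python
import re

# Illustrative pre-processing safeguard: redact obvious sensitive values
# before sending text to an external LLM. These patterns are simple
# examples; real deployments typically rely on dedicated DLP tooling.

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED-CARD]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
]

def redact(text: str) -> str:
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

prompt = "User jane.doe@example.com reported SSN 123-45-6789 in a ticket."
print(redact(prompt))
# -> "User [REDACTED-EMAIL] reported SSN [REDACTED-SSN] in a ticket."
```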
