The real business impact of AI on cybersecurity

Beyond the hype: The business reality of AI for cybersecurity

AI is now deeply embedded in cybersecurity. Attend any cybersecurity conference or trade show and AI dominates the agenda. Vendors across the industry emphasize the AI built into their offerings. The message is clear: AI is an essential component of effective cyber defense.

With AI so pervasive, it is easy to assume that AI is always the answer and always delivers better cybersecurity outcomes. The reality is more nuanced.

This report examines the use of AI in cybersecurity, with a particular focus on generative AI. It explores adoption rates, anticipated benefits, and awareness of the associated risks, drawing on a vendor-agnostic survey of 400 IT and cybersecurity leaders at small and mid-sized organizations (50-3,000 employees). It also reveals a significant blind spot in the use of AI for cyber defense.

The results provide a practical benchmark for organizations assessing their own cybersecurity approach, and a timely reminder of AI's risks, helping organizations apply AI safely to strengthen their cybersecurity posture.

AI jargon

AI encompasses a range of capabilities that can strengthen and accelerate cybersecurity in many ways. Two AI approaches widely used in cybersecurity are deep learning models and generative AI.

  • Deep learning (DL) models apply what they have learned to perform tasks. For example, suitably trained DL models can quickly determine whether a file is malicious or benign, even if they have never encountered that specific file before.
  • Generative AI (GenAI) models process inputs to create new content. For example, to accelerate security operations, GenAI can produce a natural-language narrative summarizing recent threat activity and suggest next steps for the analyst (a brief sketch of this pattern follows below).
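
To make the GenAI pattern concrete, here is a minimal, hypothetical sketch: hand the model a set of recent detections and ask for a plain-language narrative plus suggested next steps. The call_llm function and the detection records are placeholders for illustration, not a real product integration.

```python
# Hypothetical sketch: ask a GenAI model to summarize recent threat activity
# for an analyst. call_llm is a placeholder for whatever LLM client your
# tooling provides; the detection records below are made-up examples.
import json


def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to your LLM of choice and return its reply."""
    raise NotImplementedError("wire this up to your LLM client")


detections = [
    {"time": "09:14", "host": "FIN-LAPTOP-07", "event": "credential dumping tool blocked"},
    {"time": "09:21", "host": "FIN-LAPTOP-07", "event": "outbound connection to a rarely seen domain"},
]

prompt = (
    "Summarize the following detections in plain language for a SOC analyst, "
    "then suggest the next investigation steps:\n"
    + json.dumps(detections, indent=2)
)

# narrative = call_llm(prompt)  # returns a natural-language summary and suggested next steps
```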

AI isn’t a one-size-fits-all solution, with models varying significantly in complexity.

  • Large models, such as Microsoft Copilot and Google Gemini, are large language models (LLMs) trained on vast datasets and capable of a wide range of tasks.
  • Small models are typically purpose-built and trained on specific datasets to perform a single task, such as detecting malicious URLs or executables (see the sketch below).
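
To illustrate the "small model, single task" idea, the sketch below trains a tiny classifier to flag suspicious URLs from a handful of hand-crafted features. The features, training examples, and scores are hypothetical placeholders, not a production detection model.

```python
# Illustrative sketch of a compact, single-task model: a classifier that flags
# suspicious URLs based on a few simple features. Everything here is a toy
# placeholder for illustration only.
from urllib.parse import urlparse

from sklearn.linear_model import LogisticRegression


def url_features(url: str) -> list[float]:
    """Turn a URL into a small numeric feature vector."""
    host = urlparse(url).netloc
    return [
        float(len(url)),                        # overall URL length
        float(host.count(".")),                 # subdomain depth
        float(sum(c.isdigit() for c in host)),  # digits in the hostname
        float("@" in url or "-" in host),       # common obfuscation tricks
    ]


# Tiny, made-up training set purely for illustration.
labeled_urls = [
    ("https://example.com/login", 0),
    ("https://mail.example.org/inbox", 0),
    ("http://secure-update.account-verify.example.top/confirm", 1),
    ("http://192.168.0.1@phish.example.biz/bank", 1),
]
X = [url_features(u) for u, _ in labeled_urls]
y = [label for _, label in labeled_urls]

model = LogisticRegression().fit(X, y)

# Score a previously unseen URL; a higher probability means more suspicious.
candidate = "http://account-verify.example.top/reset"
score = model.predict_proba([url_features(candidate)])[0][1]
print(f"{candidate} -> suspicion score {score:.2f}")
```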

AI terminology graphic

AI integration for cybersecurity

The survey reveals that AI is now widely embedded in most organizations' cyber defenses, with 98% confirming that they use it in some capacity:

Does your organization currently use AI technologies as part of your cyber defenses? (n=400)

This near-universal embrace of AI is set to continue, with AI capabilities now on the checklist of 99% of organizations when selecting a cybersecurity platform:

How important are AI capabilities when selecting a cybersecurity platform? (n=400)

Given this widespread adoption and planned use, it is essential that organizations of all sizes and sectors understand the risks associated with AI in cybersecurity and how to mitigate them.

GenAI expectations

The ubiquity of GenAI messaging, across cybersecurity and in business and personal life more broadly, has raised expectations of how the technology can improve cybersecurity outcomes. The survey reveals the primary benefit organizations expect from GenAI capabilities in cybersecurity tools, shown below.

Top desired benefit from GenAI in cybersecurity tools
What benefits, if any, do you want generative AI capabilities in cybersecurity tools to deliver? Responses ranked first. (n=400)

The spread of responses indicates that there is no single standout benefit organizations seek from GenAI in cybersecurity. The most common expected gains, however, center on improved cybersecurity or business performance, both financial and operational. The data also suggests that having GenAI capabilities in cybersecurity solutions provides reassurance that an organization is keeping up with the latest protections.

That reducing employee burnout ranks lowest suggests organizations may be less aware of, or less concerned about, GenAI's potential to support their people. Given the shortage of cybersecurity professionals, reducing burnout and turnover is an important area where AI can help.

Desired GenAI benefits vary with organization size

The top desired benefit from GenAI in cybersecurity tools varies with organization size, likely reflecting the different challenges organizations face as they grow.

What benefits, if any, do you want generative AI capabilities in cybersecurity tools to deliver? Responses ranked first. (n=400)

Although reducing employee burnout ranked lowest overall, it was the most desired benefit among small businesses with 50-99 employees. This may reflect the disproportionate impact of employee absence on smaller organizations, which often lack cover for missing staff.

In contrast, organizations with 100-249 employees prioritize improved return on cybersecurity spending, underscoring their need for tight financial discipline. The largest organizations surveyed (1,000-3,000 employees) place the highest priority on improved protection from cyber threats.

Awareness of AI risk

While AI offers many benefits, like any technology it also introduces risks. The survey revealed varying levels of awareness of these potential downsides.

Defensive Risk: Poor-Quality and Poorly Implemented AI

Given the emphasis on enhanced protection from cyber threats as a primary desired benefit from GenAI, it is evident that reducing cybersecurity risk is a significant driver for adopting AI-powered defense solutions.

However, poor-quality and poorly implemented AI models can inadvertently introduce substantial cybersecurity risk of their own; the old adage "garbage in, garbage out" is particularly relevant to AI. Building effective AI models for cybersecurity requires a deep understanding of both threats and AI.

Organizations are largely cognizant of the risk associated with poorly developed and deployed AI in cybersecurity solutions. The vast majority (89%) of IT/cybersecurity professionals surveyed express concerns about the potential for flaws in cybersecurity tools’ generative AI capabilities to impact their organization, with 43% indicating extreme concern and 46% somewhat concerned.

Percentage concerned about GenAI in security products causing harm
Focusing on the use of AI in cybersecurity solutions, to what extent are you concerned about the potential for flaws in the Generative AI capabilities in cybersecurity tools to harm your organization? (n=400)

It is therefore unsurprising that 99% (with rounding) of organizations say that, when evaluating the GenAI capabilities in cybersecurity solutions, they assess the quality of the cybersecurity processes and controls used in the development of the GenAI: 73% assess them fully, while 27% assess them partially.

Percentage that assess the quality of GenAI in tools
When evaluating the Generative AI capabilities in cybersecurity solutions, does your organization assess the quality of the cybersecurity processes and controls used in the development of the Generative AI? (n=390)

While the high percentage reporting a full assessment may look encouraging at first glance, it actually points to a significant blind spot in this area across many organizations.

Assessing the processes and controls used to develop GenAI capabilities requires transparency from the vendor and a reasonable level of AI expertise on the part of the evaluator. Unfortunately, both are in short supply: vendors rarely make their full GenAI development processes readily available, and IT teams often have limited insight into good AI development practice. In short, many organizations don't know what they don't know.

Financial Risk: Inadequate Return on Investment

As shown earlier, improved return on cybersecurity spending (ROI) is also among the top benefits organizations hope to realize from GenAI.

High-quality GenAI capabilities in cybersecurity solutions require significant investment to develop and maintain. IT and cybersecurity leaders across organizations of all sizes recognize the likely consequence of this outlay, with 80% believing that GenAI will significantly add to the cost of their cybersecurity products.

Despite anticipating price hikes, most organizations view GenAI as a means to lower their overall cybersecurity expenses, with 87% of respondents expressing confidence that the costs associated with GenAI in cybersecurity tools will be completely offset by the efficiencies it brings.

Digging deeper, confidence in achieving a positive return grows with annual revenue: the largest organizations ($500M+) are 48% more likely than the smallest (under $10M) to agree or strongly agree that the costs of generative AI in cybersecurity tools will be fully offset by the savings it generates.

Percentage thinking savings will offset GenAI costs, split by revenue
Thinking about the cost of Generative AI capabilities, to what extent do you agree or disagree with the following statement for your organization: The costs of Generative AI in cybersecurity tools will be fully offset by the savings it delivers. Strongly agree, Agree. (n=400)

At the same time, organizations acknowledge that quantifying these costs is difficult. GenAI costs are typically bundled into the overall price of cybersecurity products and services, making it hard to determine exactly how much an organization is spending on GenAI for cybersecurity. Reflecting this lack of clarity, 75% agree that these costs are hard to quantify (39% strongly agree, 36% somewhat agree).

Difficulty in quantifying these costs also generally increases with revenue: organizations with $500M+ in annual revenue are 40% more likely to find the costs hard to quantify than those with revenue below $10M. This is likely due in part to larger organizations having more complex and extensive IT and cybersecurity environments.

Percentage that find GenAI costs hard to quantify, split by revenue
Thinking about the cost of Generative AI capabilities, to what extent do you agree or disagree with the following statement for your organization: The costs of the Generative AI capabilities in cybersecurity products are hard to quantify. Strongly agree, Agree. (n=400)

Without effective measurement, organizations risk not seeing the return they want on their AI investments in cybersecurity, or worse, spending on AI when the money would have been better invested elsewhere.

Operational Risk: Over-Reliance on AI

The sheer ubiquity of AI makes it easy to over-rely on it, to assume it is always accurate, and to take for granted that it outperforms humans at certain tasks. Fortunately, most organizations are aware of, and concerned about, the cybersecurity implications of over-reliance on AI:

  • 84% express concerns about the resultant pressure to minimize cybersecurity personnel headcount (42% extremely concerned, 41% somewhat concerned)
  • 87% express concerns about the resulting lack of cybersecurity responsibility (37% extremely concerned, 50% somewhat concerned)

These concerns are widely shared, with consistently high percentages reported by respondents across all organization size brackets and industry sectors.

Guidance and Recommendations

While AI introduces risks, a deliberate approach enables organizations to navigate them and use AI safely to strengthen both their cyber defenses and their broader business outcomes.

The following recommendations provide a starting point to help organizations mitigate the risks discussed in this report.

Ask Vendors About Their AI Development Practices

  • Training data. What is the quality, quantity, and source of the data used to train the models? Better inputs lead to better outputs.
  • Development team. Find out about the people behind the models. What level of AI expertise do they have? How well do they understand threats, adversary behaviors, and security operations?
  • Development and deployment process. What processes does the vendor follow when building and releasing AI capabilities in their solutions? What safeguards and checks are in place?

Apply Business Rigor to AI Investment Decisions

  • Set objectives. Be clear and specific about the outcomes you expect AI to deliver.
  • Quantify the benefits. Understand the scale of impact your AI investments will have.
  • Prioritize investments. AI can help in many ways, and some will have greater impact than others. Identify the metrics that matter to your organization – cost savings, staff retention, risk reduction, and more – and assess how the options compare (a simple sketch follows this list).
  • Measure the impact. Check that actual results match your initial expectations, and use what you learn to make any necessary adjustments.
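
As a simple illustration of the "quantify and prioritize" steps above, the sketch below ranks candidate AI investments by estimated net annual benefit. All figures are hypothetical placeholders; a real evaluation would also weigh harder-to-quantify factors such as risk reduction and staff retention.

```python
# Illustrative sketch of the "quantify and prioritize" step: compare candidate
# AI investments by estimated net annual benefit. All figures are hypothetical
# placeholders to be replaced with your own estimates.
from dataclasses import dataclass


@dataclass
class AiInvestment:
    name: str
    annual_cost: float           # incremental licensing / running cost per year
    hours_saved_per_week: float  # analyst time returned by the capability
    loaded_hourly_rate: float    # fully loaded cost of an analyst hour

    @property
    def annual_savings(self) -> float:
        return self.hours_saved_per_week * 52 * self.loaded_hourly_rate

    @property
    def net_benefit(self) -> float:
        return self.annual_savings - self.annual_cost


candidates = [
    AiInvestment("GenAI case summaries", annual_cost=30_000,
                 hours_saved_per_week=15, loaded_hourly_rate=70),
    AiInvestment("AI-assisted alert triage", annual_cost=55_000,
                 hours_saved_per_week=25, loaded_hourly_rate=70),
    AiInvestment("Automated report drafting", annual_cost=20_000,
                 hours_saved_per_week=5, loaded_hourly_rate=70),
]

# Rank by net benefit so the investment discussion starts from numbers, then
# revisit the actual hours saved after deployment to validate the estimates.
for inv in sorted(candidates, key=lambda c: c.net_benefit, reverse=True):
    print(f"{inv.name}: savings ${inv.annual_savings:,.0f} "
          f"- cost ${inv.annual_cost:,.0f} = net ${inv.net_benefit:,.0f}")
```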

Take a Human-Centered View of AI

  • Keep perspective. AI is just one tool in the cybersecurity toolbox. Use it, but remember that accountability for cybersecurity ultimately rests with humans.
  • Augment, don't replace. Focus on how AI can support your people by handling many routine security tasks and providing guided insights.

Survey Overview

Sophos commissioned independent research specialist Vanson Bourne to survey 400 IT security decision-makers at organizations with between 50 and 3,000 employees in November 2024. All respondents worked in the private or charity/not-for-profit sectors, and their organizations used endpoint security solutions from 19 different vendors and 14 MDR providers.

Sophos' AI-Powered Cyber Defenses

Sophos has been at the forefront of AI-enabled cybersecurity for nearly a decade. Its AI technologies work hand in hand with human cybersecurity expertise to counter a wide range of threats, wherever they originate. AI capabilities are embedded across all Sophos products and services, delivered through the sector's leading AI-powered platform. To learn more about Sophos' AI-powered cyber defenses, visit www.sophos.com/ai
