IT decision-makers are anxious about the rising cost of cybersecurity tools, which are increasingly packed with artificial intelligence features. Meanwhile, cybercriminals appear to be largely shunning AI, as there is scant discussion of how it could be exploited on cybercrime forums.
In a survey of 400 IT security leaders conducted by security company Sophos, 80% said they expect generative AI to significantly increase the cost of security tools. This tracks with separate research from Gartner, which forecasts that global technology spending will rise by nearly 10% this year, largely due to AI infrastructure upgrades.
The Sophos research found that 99% of organizations list AI capabilities as a requirement for cybersecurity platforms, with the most commonly cited reason being improved protection. However, only 20% of respondents named this as their primary reason, indicating a lack of consensus on why AI tools are needed in security.
Three-quarters of respondents said the additional cost of AI features in their cybersecurity tools is difficult to quantify. For example, Microsoft controversially increased the price of Office 365 by 45% this month due to the inclusion of Copilot.
Nevertheless, 87% of respondents believe that AI-related efficiency savings will outweigh the added cost, which may explain why 65% have already adopted security solutions with AI. The release of the low-cost AI model DeepSeek R1 has raised hopes that the price of AI tools will soon fall across the board.
SEE: HackerOne: 48% of Security Professionals Believe AI Is Risky
However, cost is not the only concern flagged by Sophos's researchers. A significant 84% of security leaders worry that high expectations for AI tools' capabilities will pressure them to reduce headcount. An even larger proportion, 89%, are concerned that flaws in the tools' AI features could work against them and introduce security vulnerabilities.
“Inadequate quality and improperly implemented AI models may inadvertently introduce significant cybersecurity risks on their own, and the principle ‘garbage in, garbage out’ is particularly pertinent to AI,” cautioned the researchers at Sophos.
Cybercriminals are not using AI as much as commonly assumed
Security concerns may be deterring cybercriminals from adopting AI to the degree analysts had predicted, according to separate research from Sophos, which found that AI is not yet widely used in cyberattacks. To gauge how far AI has spread within the hacking community, Sophos examined posts on underground forums.
The researchers identified fewer than 150 posts about GPTs or large language models over the past year. By comparison, they found more than 1,000 posts on cryptocurrency and over 600 threads on buying and selling network access.
“A majority of threat actors on the cybercrime platforms we scrutinized do not seem notably enthusiastic or intrigued by generative AI, and there is no evidence of malefactors leveraging it to craft new exploits or malware,” noted the researchers at Sophos.
One Russian-language criminal forum has had a dedicated AI area since 2019, but it contains only 300 threads, compared with more than 700 and 1,700 threads in the malware and network access sections, respectively. Still, the researchers noted this could be considered "relatively swift progression for a topic that only gained widespread recognition in the last two years."
Nonetheless, in one post, a user admitted to talking to a GPT for social reasons, to cope with loneliness, rather than to stage a cyberattack. Another user replied that doing so is "detrimental to your opsec [operational security]," further highlighting the community's distrust of the technology.
Hackers are using AI for spamming, reconnaissance, and social engineering
The posts and threads that do reference AI apply it to tasks such as spamming, open-source intelligence gathering, and social engineering; the latter includes using GPTs to generate phishing emails and spam messages.
Security provider Vipre detected a 20% increase in business email compromise attacks in Q2 2024 compared with the same period in 2023, and AI was responsible for 40% of those BEC attacks.
Other posts focus on "jailbreaking," where models are instructed to bypass their safeguards with a carefully constructed prompt. Malicious chatbots designed specifically for cybercrime have been available since 2023; models such as WormGPT led the way, and newer ones like GhostGPT are still emerging.
Sophos researchers found only a few attempts on the forums to create malware, attack tools, and exploits with AI, and these were deemed "rudimentary and subpar." Such attempts are not unheard of: in June, HP intercepted an email campaign spreading malware in the wild with a script that "highly likely was drafted with the assistance of GenAI."
Posts discussing AI-generated code were often sarcastic or critical. For example, in a thread containing allegedly hand-written code, one user responded, "Is this put together with ChatGPT or something… this code is essentially non-functional." The Sophos researchers said the prevailing sentiment is that using AI to create malware is for "apathetic and/or underqualified individuals seeking shortcuts."
Interestingly, some posts described AI-powered malware as an aspiration, signaling an interest in using the technology for attacks once it becomes viable. One post titled "The world's first AI-powered autonomous C2" included the admission that "this is simply a figment of my imagination for now."
“Some users are also using AI to automate mundane tasks,” the researchers remarked. “But the consensus appears to be that most do not rely on it for anything more intricate.”
