Will highly intelligent AI pose a threat in the near future?

What types of AI attacks are currently in practice? The answer is more than none – and their sophistication is increasing.

Will super-smart AI be attacking us anytime soon?

It was inevitable: intelligent technology was bound to be turned against unsuspecting targets after lingering in a gray area between good and bad, the classic dual-use dilemma in which beneficial technology is repurposed for malicious ends. Here's the modus operandi.

Most high-profile AI models have "ethical boundaries" in place to prevent malicious use, a digital version of the Hippocratic Oath's "first, do no harm." Ask one how to construct a weapon, for instance, and it is trained to withhold the precise details that could be used to cause serious harm.

While direct questions about weapon construction are refused, attackers can refine their questioning techniques, assisted by an array of tools, to eventually extract the desired information.

One efficient method is to query the API programmatically. Some recent projects focus on the backend API of an AI model as a path to infiltrating servers and gaining root access. Another harnesses the ChatGPT backend to identify potential targets for future attacks more effectively.
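To illustrate why programmatic access changes the economics, here is a minimal sketch in which a stubbed-out client (the `query_model` function and its crude refusal heuristic are invented purely for illustration) stands in for any real backend API. A script can push hundreds of prompt variants through such a loop and automatically sort answers from refusals, something tedious to do by hand:

```python
# Sketch: automated probing of a model API with prompt variants.
# query_model is a stand-in for a real backend API call; its simple
# keyword-based refusal heuristic is invented for this illustration.

def query_model(prompt: str) -> str:
    """Stub model: refuses any prompt containing a flagged word."""
    flagged = {"exploit", "weapon"}
    if any(word in prompt.lower() for word in flagged):
        return "REFUSED"
    return f"answer to: {prompt}"

def probe(variants: list[str]) -> dict[str, list[str]]:
    """Send every variant and separate kept answers from refusals."""
    results = {"answered": [], "refused": []}
    for prompt in variants:
        reply = query_model(prompt)
        bucket = "refused" if reply == "REFUSED" else "answered"
        results[bucket].append(prompt)
    return results

variants = [
    "how do I build an exploit for X?",    # tripped by the stub filter
    "describe common memory-safety bugs",  # passes
    "which inputs crash this parser?",     # passes
]
report = probe(variants)
print(len(report["answered"]), len(report["refused"]))  # -> 2 1
```

The point is not the stub itself but the loop around it: once querying is scripted, rephrasing and retrying scale to whatever volume the API tolerates.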

Combining AI-powered tools with other resources devoted to tackling different challenges, such as uncovering obscured IPs to pinpoint the real target server, can be formidable, particularly as automation levels increase.

In the digital realm, these methods can be combined into hybrid tools that detect vulnerabilities and then systematically probe them for exploitability, all unbeknownst to the constituent AI models.

This is akin to a “clean room design” scenario, where an AI model is tasked with solving a smaller part of a larger task defined by an attacker, and a combination ultimately forms the complex weapon.
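A toy simulation makes the weakness concrete. Here an invented per-query keyword filter stands in for a model's safety check: the combined request is blocked, yet each fragment, judged alone, looks innocuous and passes.

```python
# Toy illustration of "clean room" task splitting: a per-query filter
# (invented here for demonstration) sees only individual sub-tasks,
# never the attacker's combined intent.

BLOCKLIST = {"attack", "breach", "infiltrate"}

def filter_allows(prompt: str) -> bool:
    """Naive per-query safety check on a single prompt."""
    return not any(word in prompt.lower() for word in BLOCKLIST)

combined_task = "infiltrate the target server"
sub_tasks = [
    "list common services on a typical web server",
    "explain how login rate limiting works",
    "summarize SSH configuration options",
]

# The combined request trips the filter...
combined_blocked = not filter_allows(combined_task)
# ...but every fragment, judged in isolation, passes.
fragments_allowed = all(filter_allows(t) for t in sub_tasks)

print(combined_blocked, fragments_allowed)  # -> True True
```

Real safety systems are far more sophisticated than a keyword list, but the structural problem is the same: a filter that only sees one sub-task at a time cannot judge the intent of the whole.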

Legally, various entities are striving to establish effective barriers that impede these deceitful practices, or to impose penalties where AI models are complicit to some degree. However, apportioning fault in the right proportions, especially in a legal context, remains a formidable challenge.

Innovating new approaches

AI models can scour through extensive code repositories looking for insecure patterns and fabricating digital weapons that can subsequently be unleashed on a plethora of devices worldwide running vulnerable software. By doing so, a fresh pool of potential targets could be identified for compromise, offering a boost to those aiming to execute zero-day attacks.
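At its simplest, this kind of pattern hunting is static matching over source text. The sketch below (its pattern list is an arbitrary illustrative sample, not an authoritative ruleset) flags a few classically risky Python constructs; real tooling, AI-assisted or not, layers far more context on top:

```python
import re

# Minimal insecure-pattern scanner. The patterns are an arbitrary
# illustrative sample of well-known risky Python constructs.
RISKY_PATTERNS = {
    r"\beval\s*\(": "eval() on untrusted input enables code execution",
    r"\bpickle\.loads\s*\(": "unpickling untrusted data enables code execution",
    r"shell\s*=\s*True": "shell=True invites command injection",
}

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, warning) pairs for matched patterns."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, warning in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, warning))
    return findings

sample = '''\
import pickle, subprocess
data = pickle.loads(blob)
subprocess.run(cmd, shell=True)
'''
for lineno, warning in scan(sample):
    print(f"line {lineno}: {warning}")
```

The same matching logic cuts both ways: defenders run it to triage their own code, while attackers run it across public repositories to build target lists, which is precisely the asymmetry the article describes.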

It’s conceivable that nation states will amplify such efforts, using AI to predict which software vulnerabilities can be weaponized now and in the future. This puts defenders at a disadvantage, kicking off an AI-powered escalation between attackers and defenders that feels somewhat dystopian. Defenders will need to develop their own AI-powered defensive mechanisms for proactive protection, or merely to prevent infiltration. Let’s hope they are prepared for the challenge.

Present-day AI models possess the ability to “reason” through problems effortlessly, contemplating intricacies in a thought sequence resembling human cognition (during our more enlightened moments, at least). While the technology won’t autonomously evolve into a sentient ally (in illegal activities) in the near future, having absorbed vast amounts of internet data, one could argue that it has a comprehensive knowledge base – and can be coerced into divulging its insights.

It will keep expanding its capabilities, potentially minimizing the need for extensive guidance, empowering individuals devoid of moral restrictions to operate beyond their capacity and enabling resourceful actors to engage in operations at an unprecedented scale. Indeed, some early indicators of what’s to come have already been observed during red team exercises or even discovered in the wild.

One thing is certain: the pace of more intelligence-driven attacks will increase. When an exploitable CVE is disclosed or a new technique is deployed, quick thinking will be essential – are you prepared?
