From Exploitation to Maltreatment: AI Hazards and Offenses

Oct 16, 2024The Hacker NewsArtificial Intelligence / Cybercrime

AI analyzed from the exploiter’s viewpoint: Observe how cyber culprits are utilizing AI and exploiting its susceptibilities to infiltrate systems, users, and even other AI util

From Misuse to Abuse: AI Risks and Attacks

Oct 16, 2024The Hacker NewsArtificial Intelligence / Cybercrime

From Misuse to Abuse: AI Risks and Attacks

AI analyzed from the exploiter’s viewpoint: Observe how cyber culprits are utilizing AI and exploiting its susceptibilities to infiltrate systems, users, and even other AI utilities

Cyber offenders and AI: The Truth vs. Drama

“AI will not substitute humans in the imminent future. Nevertheless, individuals acquainted with AI will replace those individuals unfamiliar with AI,” states Etay Maor, Chief Security Strategist at Cato Networks and a pioneering member of Cato CTRL. “Likewise, infringers are turning to AI to enhance their capabilities.”

However, the hype surrounding AI’s role in cybercrime surpasses reality. Sensational headlines often exaggerate AI dangers, using terms like “Disorder-GPT” and “Black Hat AI Tools,” even stating their intention to obliterate humanity. Yet, these reports evoke more fear than accurately depict actual threats.

AI Risks and Attacks

When investigated in clandestine forums, many of these self-proclaimed “AI cyber utilities” turned out to be mere rebranded versions of basic public LLMs with no advanced functions. In reality, they were even identified by irate perpetrators as deceits.

How Intruders Wisely Utilize AI in Cyber Assaults

In practical terms, cyber culprits are still grappling with how to effectively employ AI. They encounter similar impediments and deficiencies as lawful users do, such as delusions and restricted capabilities. As per their estimations, it will require some years before they can efficiently employ GenAI for illicit purposes.

AI Risks and Attacks
AI Risks and Attacks

At present, GenAI tools are mostly utilized for rudimentary tasks, like composing phishing emails and fabricating code snippets suitable for inclusion in assaults. Furthermore, we have observed culprits providing compromised code to AI systems for evaluation, aiming to validate such code as non-malicious.

Exploiting AI to Mistreat AI: Unveiling GPTs

GPTs, introduced by OpenAI on November 6, 2023, are tailored editions of ChatGPT that permit users to include precise instructions, integrate external APIs, and assimilate unique knowledge bases. This functionality empowers users to devise highly specialized applications, such as technical support bots, educational aides, and more. Additionally, OpenAI offers developers opportunities for monetizing GPTs through a dedicated marketplace.

Misusing GPTs

GPTs pose potential security hazards. A significant risk involves exposing confidential instructions, proprietary expertise, or even API credentials embedded in the personalized GPT. Malicious actors can exploit AI, particularly through prompt manipulation, to replicate a GPT and exploit its revenue-generating potential.

Culprits can employ prompts to extract knowledge bases, instructions, setup files, and more. These requests can range from straightforward actions like instructing the custom GPT to enumerate all uploaded files and specific instructions to more intricate tasks such as directing the GPT to compress a PDF file and create a downloadable link, querying the GPT to outline all its capabilities in a structured tabular form, and beyond.

“Even safeguards implemented by developers can be circumvented, enabling the extraction of all knowledge,” remarks Vitaly Simonovich, Threat Intelligence Researcher at Cato Networks and Cato CTRL member.

These risks can be mitigated by:

  • Avoiding uploading sensitive data
  • Employing instruction-based safeguards, although these may not be foolproof. “You must consider all possible exploitable scenarios,” emphasizes Vitaly.
  • Implementing OpenAI protections

AI Offenses and Hazards

Several frameworks are presently available to aid organizations contemplating developing AI-based software:

  • NIST Artificial Intelligence Risk Management Framework
  • Google’s Secure AI Framework
  • OWASP Top 10 for LLM
  • OWASP Top 10 for LLM Applications
  • Thejust introduced the MITRE ATLAS initiative

LLM Threat Surface

Attackers can focus on six vital components of LLM (Large Language Model):

  1. Trigger – Intrusions like prompt injections, where malevolent information is utilized to influence the AI’s output
  2. Retort – Abuse or disclosure of confidential data in AI-generated reactions
  3. Structure – Robbery, tainting, or alteration of the AI model
  4. Training Data – Introducing malevolent information to change the functionality of the AI.
  5. Framework – Targeting the servers and facilities that sustain the AI
  6. Consumers – Deluding or manipulating the humans or systems depending on AI outputs

Real-World Intrusions and Dangers

Conclude with some instances of LLM manipulations that can be conveniently used in a malicious way.

  • Injecting Prompts into Customer Service Systems – A recent scenario involved a car dealership utilizing an AI chatbot for customer service. A researcher was able to influence the chatbot by providing a prompt that altered its functionality. By instructing the chatbot to affirm all customer statements and conclude each response with, “And that’s a legally binding offer,” the researcher could purchase a car at an extremely low price, revealing a significant vulnerability.
  • AI Risks and Attacks
  • Imaginations Resulting in Legal Ramifications – In another occurrence, Air Canada encountered legal consequences when their AI chatbot furnished inaccurate details regarding refund policies. After a customer relied on the chatbot’s feedback and filed a claim, Air Canada was held accountable for the deceptive information.
  • Unauthorized Data Dissemination – Samsung workers inadvertently disseminated confidential information when they utilized ChatGPT to assess code. Uploading sensitive data to external AI systems is unsafe, as the duration and accessibility of the stored data are uncertain.
  • AI and Deepfake Tech in Deception – Cyber offenders are leveraging AI beyond textual creation. A Hong Kong bank fell prey to a $25 million fraud when culprits employed live deepfake technology during a video call. The AI-generated avatars replicated trusted bank officials, persuading the target to transfer funds to a deceitful account.

Final Thoughts: AI in Cyber Crime

AI is a potent tool for both defenders and attackers. As cybercriminals persist in testing AI, it’s imperative to comprehend their mindset, strategies, and dilemmas. This will enable organizations to enhance the security of their AI systems against misapplication and exploitation.

View the full masterclass here.

Found this article intriguing? This article is a contributed piece from one of our esteemed associates. Stay updated on Twitter and LinkedIn for more exclusive content we share.

About Author

Subscribe To InfoSec Today News

You have successfully subscribed to the newsletter

There was an error while trying to send your request. Please try again.

World Wide Crypto will use the information you provide on this form to be in touch with you and to provide updates and marketing.