5 Actionable Steps to Prevent GenAI Data Leaks Without Fully Blocking AI Usage

Oct 01, 2024 | The Hacker News
Generative AI / Data Security
Since its emergence, Generative AI has transformed business productivity. GenAI tools enable faster and more effective software development, financial analysis, business planning, and customer engagement. However, this agility also brings significant risk: the potential for sensitive data leakage. As organizations attempt to balance productivity gains with security concerns, many have been forced to choose between unrestricted GenAI usage and banning it altogether.

A new e-guide by LayerX, titled 5 Workable Approaches to Halt Data Breach via Generative AI Tools, is designed to help organizations address the challenges of GenAI usage in the workplace. The guide offers practical steps for security managers to protect sensitive corporate data while still reaping the productivity benefits of GenAI tools such as ChatGPT. The aim is to allow companies to strike the right balance between innovation and security.

Concerns Surrounding ChatGPT

The e-guide addresses the growing concern that unrestricted GenAI usage could lead to unintentional data exposure. The Samsung data leak is a case in point: employees accidentally exposed proprietary code while using ChatGPT, which prompted the company to ban GenAI tools entirely. Incidents like this underscore the need for organizations to establish clear policies and controls to mitigate the risks of GenAI tools.

These risks are more than anecdotal. According to research by LayerX Security:

  • 15% of enterprise users have pasted data into GenAI tools.
  • 6% of enterprise users have pasted sensitive data, such as source code, PII, or confidential business information, into GenAI tools.
  • Among the top 5% of GenAI users, who are the heaviest users, 50% work in R&D.
  • Source code is the leading type of sensitive data exposed, accounting for 31% of exposed data.

Key Steps for Security Managers

What can security managers do to allow GenAI use without exposing the organization to data exfiltration risks? Key steps from the e-guide include:

  1. Mapping AI Usage in the Organization – Start by understanding what you need to protect. Map who is using GenAI tools, how they are used, for what purposes, and what types of data are being exposed. This will form the foundation of an effective risk management strategy.
  2. Restricting Personal Accounts – Next, leverage the security features offered by GenAI tools themselves. Corporate GenAI accounts provide built-in safeguards that can greatly reduce the risk of sensitive data leakage, including limits on what data is used for training, restrictions on data retention, account-sharing limitations, anonymization, and more. It is important to enforce the use of non-personal accounts when accessing GenAI, which requires a dedicated tool.
  3. Prompting Users – As a third step, harness the power of your own workforce. Simple reminder messages that pop up when employees use GenAI tools raise awareness of the potential consequences of their actions and of organizational policy, and can effectively reduce risky behavior.
  4. Blocking the Input of Sensitive Information – Now it's time to introduce advanced technology. Implement automated controls that restrict the pasting of large amounts of sensitive data into GenAI tools. This is especially effective at preventing employees from sharing source code, customer information, PII, financial data, and other sensitive material.
  5. Restricting GenAI Browser Extensions – Finally, manage the risk posed by browser extensions. Automatically classify AI browser extensions by their risk level, and block those that could access sensitive corporate data without authorization.
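To make step 4 concrete, the sketch below shows one naive way such an automated control might classify text before it is pasted into a GenAI prompt. This is purely illustrative and is not LayerX's implementation: the pattern names, the regexes, and the `should_block` threshold are all simplified assumptions, and a production DLP engine would use far richer detection (validated formats, context, ML classifiers) rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only; a real DLP engine uses far richer detection.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN-like format
    "credit_card": re.compile(r"\b(?:\d{4}[ -]){3}\d{4}\b"),
}

# Crude heuristic for pasted source code: two or more common keywords.
CODE_HINTS = re.compile(r"\b(def|class|import|function|return|public|private)\b")


def classify_prompt(text: str) -> list[str]:
    """Return the sensitive-data categories detected in prompt text."""
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]
    if len(CODE_HINTS.findall(text)) >= 2:
        hits.append("source_code")
    return hits


def should_block(text: str) -> bool:
    """Block the paste if any sensitive category is detected."""
    return bool(classify_prompt(text))
```

In practice a control like this would run in the browser or an endpoint agent, warn the user (tying into step 3), and log the event for security review rather than silently blocking.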

To reap the full productivity benefits of Generative AI, enterprises need to strike a balance between productivity and security. As a result, GenAI security must not be a binary choice between allowing all AI activity and blocking it all. Instead, a more nuanced, fine-tuned approach will enable organizations to capture the business benefits without leaving themselves exposed to undue risk. For security managers, this is the path to becoming a key business partner and enabler.

Read the guide and start implementing these steps today.

This article is a contributed piece from one of our valued partners.
