How to Write a Generative AI Cybersecurity Policy

Amid all the buzz, Chief Information Security Officers (CISOs) urgently need practical guidance on establishing AI security protocols to safeguard their firms as they race to catch up on implementations and strategies. By combining the right cybersecurity policy with advanced tools, companies can meet their current objectives and lay the groundwork for addressing the evolving intricacies of AI in the future.

When the most talented people working on a novel technology warn that mitigating its dangers should be a worldwide priority, it is prudent to take note. That is what happened on May 30, 2023, when the Center for AI Safety released an open letter, cosigned by more than 350 scientists and industry leaders, cautioning about the gravest potential threats posed by AI.

As much of the ensuing media coverage highlighted, fixating on hypothetical worst-case scenarios can divert attention from the AI risks we are encountering right now, including built-in biases and fabricated information. The latter hit the news recently when an attorney's AI-generated legal filing was found to contain completely invented cases.

Our previous AI blog posts have examined some of the immediate AI security risks corporate CISOs need to consider: AI's ability to mimic humans and execute sophisticated phishing ploys; ambiguity over who owns the data fed into and produced by public AI platforms; and outright unreliability, which covers not only inaccurate information generated by AI but also AI that gets 'contaminated' by unreliable data it absorbs from the internet and other sources.

In my own exchanges with ChatGPT about network security facts, I was able, after first receiving incorrect information, to press it into giving the correct answer it appeared to know all along. And while the feature list for ChatGPT Enterprise states that it does not train on your data, not all staff members and consultants will use an Enterprise version exclusively. Even where a private language model is employed, the repercussions of a breach of any AI, public or private, need to be considered.

If these are the risks, the next question follows naturally: "How can CISOs enhance their organizations' AI security?"

A Strong Policy Forms the Bedrock of AI Security

Cybersecurity leaders in corporate IT learned from experience over the past decade that prohibiting specific software and devices typically backfires and may even raise risk for the enterprise. If an application or solution is convenient enough, or if the company-approved tools fail to meet users' requirements or desires, people will find a way to keep using their preferred tools, producing the problem of shadow IT.

ChatGPT accumulated over 100 million users within just two months of its launch, and it and other innovative AI platforms are already deeply ingrained in people's workflows. Banning these tools from organizational use could therefore create a 'shadow AI' predicament more dangerous than any previously encountered circumvention. Moreover, many businesses have been promoting AI adoption to boost productivity and may now find it difficult to restrict use. If the policy choice is to prohibit unapproved AI, there must be methods for detecting it and, ideally, preventing it.
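
To make the detection idea concrete, here is a minimal sketch in Python of sweeping web-proxy logs for traffic to known generative AI services. The log format and the domain list are illustrative assumptions; a real deployment would rely on a maintained URL-category feed from its proxy or DNS-filtering vendor.

```python
import csv
from collections import Counter

# Illustrative, deliberately incomplete list of generative AI endpoints.
AI_DOMAINS = {"chat.openai.com", "api.openai.com", "bard.google.com", "claude.ai"}

def find_shadow_ai(proxy_log_path: str) -> Counter:
    """Count requests per (user, domain) to known AI services.

    Assumes a CSV proxy log with 'user' and 'host' columns; adjust the
    field names to whatever your proxy actually exports.
    """
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            if host in AI_DOMAINS or any(host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in find_shadow_ai("proxy.csv").most_common(20):
        print(f"{user} -> {domain}: {count} requests")
```

Even a report this simple tells a CISO whether a ban would be fighting established habits, which is often the decisive input to the policy choice.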

Hence, what CISOs should do is grant people access to AI tools endorsed by sensible usage policies. Examples of such policies for large language models like ChatGPT are starting to circulate online, along with recommendations on evaluating AI security vulnerabilities, but standard approaches are still lacking. Even the IEEE has not fully wrapped its head around the matter, and while the quality of online resources is steadily improving, it is not consistently reliable. Any organization seeking AI security policy templates should be extremely selective.

Four Essential AI Security Policy Factors

Given the risks described above, safeguarding the confidentiality and integrity of corporate data is an obvious focal point for AI security. Consequently, any corporate policy should, at the very least:

1. Forbid the transmission of confidential or private data to public AI platforms or external third-party solutions beyond the enterprise's jurisdiction. "Until further clarification is provided, companies should instruct all employees utilizing ChatGPT and other public generative AI utilities to handle the shared information as though they were placing it on a public forum or social media platform," as Gartner put it recently. (A sketch of one way to automate this screening appears after this list.)

2. Maintain clear rules of separation between data categories so that personally identifiable information and material covered by legal or regulatory safeguards is never mingled with data that can be shared publicly. This may require establishing a data classification scheme for corporate data if one does not already exist.

3. Verify and scrutinize any information generated by an AI platform to confirm its truth and accuracy. The risk to the enterprise of publishing blatantly false AI output is immense, both reputationally and financially. Platforms capable of furnishing citations and references should be required to do so, and those sources should be checked. Otherwise, every claim in AI-generated content should be reviewed before use. Gartner warns, "Though [ChatGPT] appears to undertake complex operations, it lacks an understanding of the underlying concepts. It merely provides predictions."

4. Embrace, and adapt, a zero trust mentality. Zero trust is a robust approach to managing the risks of user, device, and application access to corporate IT resources and data, and it has gained traction as traditional corporate network boundaries dissolve. AI's ability to impersonate trusted entities will likely strain zero-trust frameworks, but that only makes controlling untrusted connections more important. The emergence of AI-driven threats makes a vigilant zero trust stance more necessary, not less.
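
To ground the first two points, below is a minimal sketch, again in Python, of a pre-submission gate that an approved AI front end could run before a prompt leaves the enterprise. The regular expressions and classification markers are illustrative assumptions, not a complete PII detector or a real classification scheme.

```python
import re

# Illustrative patterns only; production screening would use a dedicated
# DLP/PII library plus the organization's own classification labels.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

# Hypothetical markers stamped on documents by an existing labeling system.
RESTRICTED_LABELS = ("CONFIDENTIAL", "INTERNAL ONLY", "PII")

def screen_prompt(prompt: str) -> list[str]:
    """Return the policy violations found in a prompt; an empty list means it may be sent."""
    violations = [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]
    violations += [f"label:{lbl}" for lbl in RESTRICTED_LABELS if lbl in prompt.upper()]
    return violations

if __name__ == "__main__":
    sample = "Summarize this CONFIDENTIAL memo; customer SSN is 123-45-6789."
    problems = screen_prompt(sample)
    if problems:
        print("Blocked before submission:", ", ".join(problems))
    else:
        print("Prompt cleared for the approved AI tool.")
```

A gate like this is only as good as the classification scheme behind it, which is why point 2 treats classification as a prerequisite rather than an afterthought.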

Selecting the Right Solutions

AI security rules can be upheld and enforced with technology. New AI tools are being developed to identify AI-generated hoaxes and scams, plagiarized content, and other violations. These tools will eventually be deployed to supervise network activity, acting like radar guns or traffic cameras that expose malicious AI activity.

Currently, extended detection and response (XDR) solutions can be used to detect anomalous behavior within the corporate IT environment. XDR applies AI and machine learning to large volumes of remotely gathered data, policing network norms at scale. Although not a creative, generative form of AI like ChatGPT, XDR is a trained tool adept at performing specific security duties with precision and dependability.
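
As a toy illustration of the kind of machine learning behind such tools (and emphatically not a depiction of any vendor's XDR internals), the sketch below uses scikit-learn's IsolationForest to flag outlier sessions in three made-up telemetry features:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy telemetry: one row per session -- [MB sent out, logins per hour, distinct hosts contacted].
# Real XDR pipelines ingest far richer signals gathered across endpoints, email, and network.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[50, 2, 3], scale=[10, 1, 1], size=(500, 3))
suspicious = np.array([[900.0, 40.0, 60.0]])  # an exfiltration-like outlier
sessions = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(sessions)
flags = model.predict(sessions)  # -1 marks anomalies

for idx in np.where(flags == -1)[0]:
    print(f"Session {idx} flagged as anomalous: {sessions[idx].round(1)}")
```

The point is not the algorithm but the division of labor: the model surfaces statistically unusual behavior, and human analysts decide whether it violates policy.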

Other monitoring tools, such as security information and event management (SIEM) systems, application firewalls, and data loss prevention (DLP) solutions, can also be employed to govern web browsing and software use while monitoring the outward flow of information from the company's IT environment, reducing the risk of data breaches.
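
One hedged sketch of such an outbound-flow check: a volume heuristic that flags unusually large uploads to destinations outside an approved list. The allowlist, threshold, and log layout are all assumptions made for illustration.

```python
import csv
from collections import defaultdict

APPROVED_HOSTS = {"sharepoint.example.com", "crm.example.com"}  # hypothetical allowlist
THRESHOLD_BYTES = 100 * 1024 * 1024  # illustrative 100 MB per user/host per log period

def egress_alerts(log_path: str) -> dict:
    """Sum outbound bytes per (user, host) and return the pairs exceeding the threshold.

    Assumes a CSV log with 'user', 'host', and 'bytes_out' columns; adapt to
    whatever schema your SIEM or proxy exports.
    """
    totals = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["host"] not in APPROVED_HOSTS:
                totals[(row["user"], row["host"])] += int(row["bytes_out"])
    return {pair: total for pair, total in totals.items() if total > THRESHOLD_BYTES}

if __name__ == "__main__":
    for (user, host), nbytes in egress_alerts("egress.csv").items():
        print(f"ALERT: {user} sent {nbytes / 1e6:.0f} MB to unapproved host {host}")
```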

Recognizing the Constraints

Besides formulating astute corporate policies for AI security and adopting existing and emerging tools as they mature, organizations should determine how much risk they are prepared to accept in order to capitalize on AI's capabilities. An article from the Society for Human Resource Management suggests that organizations formalize their risk tolerance to guide decisions about how extensively AI is deployed, and for what purposes.

The saga of AI is only beginning, and no one has a definite grasp of what the future holds. What is clear is that AI is here to stay and, despite its dangers, offers many advantages if used sensibly. Looking ahead, we will see AI itself employed to combat malevolent uses of AI. For now, though, the best safeguard is to start with a thoughtful and discerning approach.

