
GenAI compliance is an oxymoron. Ways to make the best of it

Include humans in the decision-making process

Even though adding human workers to genAI workflows can slow operations and diminish the efficiency gains genAI was deployed to deliver, Taylor suggested that occasional human oversight can be beneficial.

He pointed to the example of a chatbot that mistakenly told an Air Canada customer they could purchase a ticket immediately and claim a bereavement credit afterwards, a policy the airline did not offer. A Canadian civil tribunal ruled that the airline had to refund the customer because of the misinformation the chatbot provided on the company’s website.

“Although introducing a human element in real-time during the chat session may not be technologically viable, as it would defeat the purpose of deploying a chatbot, having a human reviewer post-interaction, perhaps through random sampling, could be constructive,” Taylor explained. “This individual could scrutinize the chatbot’s responses to identify inaccuracies promptly, reach out to impacted users, and refine the solution to minimize the occurrence (hopefully) of such inaccuracies in the future.”
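The random-sampling review Taylor describes could be sketched roughly as follows. The transcript structure, sampling rate, and field names here are illustrative assumptions, not details from Taylor's description:

```python
import random

# Hypothetical transcript records; field names are assumptions for illustration.
transcripts = [
    {"id": 1, "bot_reply": "You can request a bereavement fare credit after travel."},
    {"id": 2, "bot_reply": "Checked bags are included on international flights."},
    {"id": 3, "bot_reply": "A booking reference is required to change a flight."},
]

SAMPLE_RATE = 0.10  # review roughly 10% of chats; tune to reviewer capacity

def sample_for_review(transcripts, rate, seed=None):
    """Randomly select completed chat transcripts for post-interaction human review."""
    rng = random.Random(seed)
    return [t for t in transcripts if rng.random() < rate]

review_queue = sample_for_review(transcripts, SAMPLE_RATE, seed=42)
# A human reviewer then checks each sampled reply against actual policy,
# contacts affected customers, and feeds corrections back into the bot.
```

The key design point is that review happens after the chat ends, so it adds no latency to the customer interaction itself.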

Be ready to engage with regulators in-depth

Another compliance consideration with genAI is the need to give regulators far more technical detail than CIOs have typically had to provide in regulatory discussions.

“The CIO must be ready to disclose a substantial amount of information, such as elucidating the entire workflow process,” mentioned Anzelc from Three Arc. “Detailing the initial intent, disclosing all underlying data, explaining what actually transpired, and the reasons behind it. Full data lineage. Did genAI veer off course and extract data from an external source or fabricate it? What was the algorithmic framework? That’s where the challenges intensify.”
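The kind of full data lineage Anzelc describes implies recording what each step of a genAI workflow saw and produced. A minimal sketch, assuming a simple append-only audit log (the field names, `log_step` helper, and example steps are all hypothetical):

```python
import datetime
import json

# Minimal lineage log: one record per step of a genAI workflow.
audit_log = []

def log_step(step, inputs, output, data_sources):
    """Record what the model saw and produced at each step, so the full
    workflow can be reconstructed later for a regulator."""
    audit_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "step": step,
        "inputs": inputs,
        "output": output,
        "data_sources": data_sources,  # where the data came from (lineage)
    })

log_step(
    step="retrieve_policy",
    inputs={"query": "bereavement fares"},
    output="Policy doc, section 4.2",
    data_sources=["internal:policy_db"],
)
log_step(
    step="generate_reply",
    inputs={"prompt": "Summarize the bereavement fare policy"},
    output="Refunds may be requested before travel.",
    data_sources=["model:chatbot-v1"],
)

print(json.dumps(audit_log, indent=2))
```

A log like this is what lets you answer Anzelc's question after the fact: whether the model drew on an approved internal source, pulled from somewhere external, or fabricated the answer outright.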

In the aftermath of an incident, organizations must swiftly implement changes to prevent similar issues from recurring. “This may necessitate a revamp or adjustments to the tool’s functionality or the flow of inputs and outputs. Concomitantly, rectify any oversight in monitoring metrics that were exposed to ensure prompt identification of future problems,” Anzelc advised.
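The monitoring metrics Anzelc mentions could take a form like the check below. The specific metric (share of sampled chatbot replies that human reviewers flagged as inaccurate) and the threshold value are illustrative assumptions:

```python
# Hypothetical monitoring check: escalate when the share of sampled chatbot
# replies flagged as inaccurate by human reviewers crosses a threshold.
FLAG_RATE_THRESHOLD = 0.05  # 5%; an assumed value, tune to risk appetite

def needs_escalation(flagged, sampled, threshold=FLAG_RATE_THRESHOLD):
    """Return True if the observed inaccuracy rate warrants escalation."""
    if sampled == 0:
        return False  # no data yet
    return flagged / sampled > threshold

# Example: 4 flagged replies out of 50 sampled is an 8% rate, above 5%.
print(needs_escalation(4, 50))  # True
```

Running a check like this on each review batch is one way to ensure "prompt identification of future problems" rather than discovering them from a customer complaint or a tribunal filing.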

It’s imperative to establish a methodical approach to gauging the repercussions of an incident, she added.

“This could encompass the financial impact on customers, as evidenced in the case of Air Canada’s chatbot, or other compliance-related concerns. Instances include potentially defamatory remarks recently made by X’s chatbot Grok or employee actions like the scenario of the Texas A&M professor who flunked an entire class because a generative AI tool erroneously claimed that all assignments had been authored by AI and not human students,” Anzelc remarked.

“Comprehend the supplementary compliance ramifications, both from a regulatory outlook as well as the agreements and protocols you have in place with clients, vendors, and staff. You will likely need to reassess the implications as you gain more insights into the root cause of the problem.”
