What is your data strategy for an AI future?

As enterprises become more data-driven, the old computing adage "garbage in, garbage out" (GIGO) has never been truer. The application of AI to many business processes will only accelerate the need to ensure the veracity and timeliness of the data used, whether generated internally or sourced externally.

The costs of bad data

Gartner has estimated that organizations lose an average of $12.9m a year from using poor-quality data, and IBM calculates that bad data costs the US economy more than $3 trillion a year. Most of these costs arise from the work carried out within enterprises to check and correct data as it moves through and across departments; IBM believes that half of knowledge workers' time is wasted on these activities.

Apart from these internal costs, there's the greater problem of reputational damage among customers, regulators, and suppliers when organizations act improperly based on bad or misleading data. Sports Illustrated and its CEO found this out recently when it was revealed that the magazine had published articles attributed to fake authors with AI-generated images. While the CEO lost his job, the parent company, Arena Group, lost 20% of its market value. There have also been several high-profile cases of law firms getting into hot water by submitting fake, AI-generated cases as precedents in legal disputes.

The AI black box

Although costly, checking and correcting the data used in corporate decision-making and business operations has become an established practice for most enterprises. However, understanding how some large language models (LLMs) have been trained, on what data, and whether their outputs can be trusted is another matter, particularly given the rate at which they hallucinate. In Australia, for instance, an elected regional mayor threatened to sue OpenAI after the company's ChatGPT falsely claimed he had served prison time for bribery when, in fact, he had been a whistleblower on the criminal activity.

Training an LLM on trusted data and adopting approaches such as iterative querying, retrieval-augmented generation (RAG), or explicit reasoning steps can significantly lessen the danger of hallucinations, but cannot guarantee they won't occur.
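
To make the RAG idea concrete, here is a minimal, illustrative sketch of the pattern: retrieve the trusted documents most relevant to a query and inject them into the prompt, so the model answers from grounded context rather than from its parametric memory alone. Everything in it is an assumption for illustration: the tiny in-memory corpus, the bag-of-words retriever, and the call_llm placeholder stand in for the vector store, embedding model, and model API a production system would use.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# call_llm is a hypothetical placeholder for whatever model API you use;
# retrieval here is a toy bag-of-words cosine similarity over a small
# in-memory corpus of trusted documents.
import math
from collections import Counter

TRUSTED_DOCS = [
    "Gartner estimates organizations lose an average of $12.9m a year to poor-quality data.",
    "IBM calculates that bad data costs the US economy more than $3 trillion a year.",
    "Half of knowledge workers' time is spent checking and correcting data.",
]

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts for a piece of text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k trusted documents most similar to the query."""
    q = vectorize(query)
    ranked = sorted(TRUSTED_DOCS, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model API call."""
    return f"[model response grounded in a prompt of {len(prompt)} chars]"

def answer(question: str) -> str:
    # Ground the model in retrieved, trusted context rather than letting it
    # answer from memory alone; this reduces, but does not eliminate,
    # hallucinations.
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer("What does bad data cost the US economy?"))
```

In practice the retriever would be an embedding-based vector search and call_llm a real model endpoint, but the grounding step, building the prompt from retrieved trusted context, is what curbs hallucinations.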

Training on synthetic data

As companies seek competitive advantage by deploying AI systems, the rewards may go to those with access to sufficient, relevant proprietary data to train their models. But what about the majority of enterprises without access to such data? Researchers have predicted that the stock of high-quality text data used for training LLMs will run out before 2026 if current trends continue, which is driving interest in synthetic data: artificially generated examples that stand in for scarce real-world data.
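
As a purely hypothetical illustration of what synthetic data means in practice, the sketch below generates labeled training examples by filling templates with random slot values. Production pipelines more often prompt a generator model and filter its output for quality, but the shape is the same: programmatically produce examples where real ones are scarce. The task, templates, and labels here are all invented.

```python
# Illustrative sketch of template-based synthetic data generation for a
# text-classification task. The intents, templates, and slot values are
# invented examples, not a real dataset.
import random

TEMPLATES = {
    "invoice_query": [
        "Where can I find invoice {num}?",
        "Please resend invoice {num} to {dept}.",
    ],
    "access_request": [
        "Can you grant {dept} access to the {system} system?",
        "I need my {system} permissions restored.",
    ],
}

def synthesize(n: int, seed: int = 0) -> list[tuple[str, str]]:
    """Generate n (text, label) pairs by filling templates with random slots."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        label = rng.choice(list(TEMPLATES))
        template = rng.choice(TEMPLATES[label])
        # str.format ignores keyword arguments a template does not use.
        text = template.format(
            num=rng.randint(1000, 9999),
            dept=rng.choice(["finance", "legal", "sales"]),
            system="CRM",
        )
        rows.append((text, label))
    return rows

for text, label in synthesize(3):
    print(label, "->", text)
```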
