The rocky road ahead for AI

Since its inception, artificial intelligence (AI) has been changing fast. With the introduction of ChatGPT, DALL-E, and other generative AI tools, 2023 emerged as a year of great progress, putting AI into the hands of the masses. Yet for all its glory, we're also at an inflection point.

AI will revolutionize industries and augment human capabilities, but it will also raise important ethical questions. We'll have to think critically about whether easier and faster AI-powered tasks are actually better, or just easier and faster. Are the same tools high school students use to write their papers the ones we can rely on to power enterprise-grade applications?

The short answer is no, but the hype might lend itself to another story. It’s clear that AI is primed for another landmark year, but it’s how we navigate the challenges it brings that will determine its true value. Here are three potential growing pains business leaders should keep in mind as they embark on their AI journey in 2024. 

LLMs will cause struggles 

Prompt engineering is one thing, but building large language model (LLM) applications that deliver accurate, enterprise-grade results is harder than initially advertised. LLMs promise to make AI tasks smarter, smoother, and more scalable than ever, but getting them to operate efficiently is a roadblock many businesses will face. Getting started is simple; achieving the accuracy and reliability required for enterprise use is not.

Dealing with robustness, fairness, bias, truthfulness, and data leakage takes a lot of work, and all are prerequisites for getting LLMs into production safely. Take healthcare, for example. Recent academic research found that GPT models performed poorly on critical tasks like named entity recognition (NER) and de-identification. In fact, the healthcare-specific model PubMedBERT significantly outperformed both GPT models in NER, relation extraction, and multi-label classification tasks.
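To make the task concrete: de-identification means locating and masking protected health information (PHI) in free text before it can be analyzed or shared. The toy sketch below uses two hand-written regex rules on a made-up note (the patterns, placeholder tags, and note text are all illustrative assumptions, not any real system); production de-identification relies on fine-tuned models such as PubMedBERT precisely because rule lists like this miss most real-world PHI.

```python
import re

# Toy PHI patterns: dates and medical record numbers only.
# Real de-identification must also catch names, addresses, phone
# numbers, ages over 89, and contextual cues these rules will miss.
PHI_PATTERNS = [
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"), "[DATE]"),
    (re.compile(r"\bMRN[:\s]*\d+\b"), "[MRN]"),
]

def deidentify(text: str) -> str:
    """Replace each matched PHI span with a placeholder tag."""
    for pattern, tag in PHI_PATTERNS:
        text = pattern.sub(tag, text)
    return text

note = "Patient seen on 03/14/2023, MRN: 445821, reports chest pain."
print(deidentify(note))
# Patient seen on [DATE], [MRN], reports chest pain.
```

The gap between these two rules and the dozens of PHI categories a compliant system must handle is exactly why the task is a meaningful benchmark for clinical NLP models.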

Cost is another major concern when applying GPT models to such tasks. Some LLMs are two orders of magnitude more expensive than smaller, task-specific models. Continuing with the healthcare example: given the volume of clinical information to analyze, this significantly reduces the economic viability of GPT-based solutions. As a result, we'll unfortunately see many LLM-specific projects stall or fail entirely.
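A back-of-the-envelope sketch shows how a two-orders-of-magnitude price gap compounds at clinical scale. All prices and volumes below are hypothetical assumptions chosen for illustration, not vendor quotes or real workload figures.

```python
# Hypothetical per-1K-token processing costs (illustrative only).
LARGE_LLM_COST_PER_1K = 0.03      # assumed large general-purpose LLM
SMALL_MODEL_COST_PER_1K = 0.0003  # assumed small domain model, 100x cheaper

def annual_cost(notes_per_year: int, tokens_per_note: int,
                cost_per_1k: float) -> float:
    """Total yearly cost of running a model over every clinical note."""
    total_tokens = notes_per_year * tokens_per_note
    return total_tokens / 1000 * cost_per_1k

# Assumed workload: 5 million notes per year, ~1,000 tokens each.
notes, tokens = 5_000_000, 1_000
print(f"Large LLM:   ${annual_cost(notes, tokens, LARGE_LLM_COST_PER_1K):,.0f}")
print(f"Small model: ${annual_cost(notes, tokens, SMALL_MODEL_COST_PER_1K):,.0f}")
# Large LLM:   $150,000
# Small model: $1,500
```

Under these assumptions, the same workload costs $150,000 a year on the large model versus $1,500 on the small one, which is the kind of delta that decides whether a clinical NLP project is economically viable at all.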
